Mathematical Problems in Engineering, Volume 2014, Article ID 393265, 12 pages, https://doi.org/10.1155/2014/393265

Special Issue: Computational Intelligence Approaches to Robotics, Automation, and Control

Research Article | Open Access

An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition

Academic Editor: Yi Chen
Received: 10 Jan 2014
Revised: 17 Jul 2014
Accepted: 23 Jul 2014
Published: 31 Aug 2014

Abstract

We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing traditional face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. LDA is used to project samples onto a new discriminant feature space, while the K nearest neighbor (KNN) classifier is adopted for sample set classification. The developed algorithm is validated on the ORL, FERET, and YALE face databases and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.

1. Introduction

Face recognition has become a topical and timely research focus in the fields of pattern recognition and computer vision because of its wide range of potential applications [1, 2]. Feature extraction is the key element in face recognition, and diverse recognition methods currently use different extraction strategies. One of the most popular algorithms is principal component analysis (PCA), which aims to find the projection directions with the minimum reconstruction error and then map the face data set to a low-dimensional space spanned by the directions corresponding to the top eigenvalues [3, 4]. Traditional PCA face recognition technology can reach an accuracy of 70%–92% [5]. However, it is still not fully practical.

PCA has certain limitations which result in poor adaptability to variations in image brightness and facial expression [6–9]. Under either strong or weak lighting, the information carried by facial features is deficient; hence the structural information from the feature points of the face image can hardly be captured using traditional algorithms like PCA [10]. In addition, existing algorithms based on capturing a single expression find it difficult to capture the correct features of the same person when his or her facial expression changes. Traditional PCA fails to capture the natural structure and correlation present in the data set [3], which leads to a potential loss of compact and/or useful facial representations and results in a higher reconstruction error rate [11].

Many methods have been proposed to address the limitations of PCA described above. In [12], Bansal and Chawla proposed normalized principal component analysis (NPCA) to improve the recognition rate; they normalized images to remove lighting variations by applying SVD instead of eigenvalue decomposition. Pereira et al. [13] introduced a new dimension-reducing technique called class-modular image principal component analysis (CMIPCA) to extract local and global information, reducing illumination effects, facial expressions, and head-pose changes and yielding a speed-up over PCA. In [14], Tsai showed an application of dimensionality reduction techniques, such as PCA, EM-PCA, multidimensional scaling, and locally linear embedding, to identify emotions in facial animations, but the application was not intended for realistic human faces.

In our method, we decided to compensate for some of these limitations of PCA by adopting the MPCA algorithm together with the LDA algorithm as the basis of the study [3, 15]. Instead of the traditional approach, which flattens two-dimensional image data into vectors, the MPCA algorithm integrates multiple face images into a high-dimensional tensor and processes the data in tensor space. The advantage of this approach lies in its ability to preserve the structural information of face images, which consequently increases the accuracy rate because spatial relationships between pixels are taken into account. When the brightness or facial expression changes, this spatial structural information between pixels becomes particularly important.

LDA is adopted to further reduce the dimensions of the samples processed by MPCA, as it is capable of aggregating the samples in the subspace and hence improving the face recognition rate [16, 17]. We combine MPCA and LDA to form an LDA subspace, from which both MPCA features and LDA features can be extracted.

The organization of this paper is as follows. Our proposed algorithm is discussed in Section 2. The methodology of the approach is presented in Section 3. To demonstrate the effectiveness of the proposed method, experimental results are shown in Section 4. Finally, conclusions are drawn in Section 5.

2. Principle of MPCA

In computer vision, most objects are naturally represented as Nth-order tensors [18]. Take Figure 1 as an example: the image matrix in (i) is a 2nd-order tensor, while the movie clip in (ii) is a 3rd-order tensor. Traditional subspace dimensionality reduction techniques such as PCA transform the image matrix into a vector of high dimensionality and operate in one mode only, which cannot meet the need of dimensionality reduction; such techniques are therefore unable to handle multidimensional objects well or to produce satisfactory results. A reduction algorithm which can operate directly on a high-order tensor object is therefore desirable. The two-dimensional PCA (2DPCA) algorithm was proposed and developed while researchers were pursuing dimensionality reduction solutions that represent facial images as matrices (2nd-order tensors) instead of vectors [19–22]. However, 2DPCA can only project images in a single mode, which results in poor dimensionality reduction [3, 23]. Thus, the more efficient MPCA algorithm has been proposed to obtain better dimensionality reduction.

2.1. Tensor Notations and Definitions

Multilinear principal component analysis (MPCA) was introduced in detail in [3], where it was used to solve the problem of gait recognition. Before describing MPCA, we introduce the notation used in this paper.

A vector $\mathbf{u}$ denotes a 1st-order tensor, a matrix $\mathbf{A}$ denotes a 2nd-order tensor, and $\mathcal{A}$ denotes a 3rd-order tensor; higher-order tensors are indicated by $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$. Assume an image matrix is indicated by $\mathbf{A}\in\mathbb{R}^{m\times n}$ and its tensor space is indicated by $\mathbb{R}^m\otimes\mathbb{R}^n$. $\{\mathbf{u}_1,\ldots,\mathbf{u}_m\}$ indicates an orthonormal basis of the vector space $\mathbb{R}^m$ and $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ indicates an orthonormal basis of the vector space $\mathbb{R}^n$. The outer products $\mathbf{u}_i\circ\mathbf{v}_j$ form an orthonormal basis of the tensor space $\mathbb{R}^m\otimes\mathbb{R}^n$, so the image matrix equals
$$\mathbf{A}=\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\mathbf{u}_i^{T}\mathbf{A}\,\mathbf{v}_j\right)\mathbf{u}_i\circ\mathbf{v}_j. \tag{1}$$

Define two matrices $\mathbf{U}=[\mathbf{u}_1,\ldots,\mathbf{u}_{m'}]\in\mathbb{R}^{m\times m'}$ and $\mathbf{V}=[\mathbf{v}_1,\ldots,\mathbf{v}_{n'}]\in\mathbb{R}^{n\times n'}$. Assume their columns span subspaces of $\mathbb{R}^m$ and $\mathbb{R}^n$ formed by the basis vectors $\mathbf{u}_i$ and $\mathbf{v}_j$. Then $\mathbf{U}\otimes\mathbf{V}$ indicates a subspace of the tensor space $\mathbb{R}^m\otimes\mathbb{R}^n$. The result of projecting the 2nd-order tensor $\mathbf{A}$ onto this subspace is indicated by
$$\mathbf{B}=\mathbf{U}^{T}\mathbf{A}\,\mathbf{V}. \tag{2}$$

Based on different objective functions, the transformation matrices $\mathbf{U}$ and $\mathbf{V}$ can be obtained by iteration; hence dimension reduction can be achieved.
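A minimal NumPy sketch of the 2nd-order projection just described; the sizes and matrix names here are illustrative assumptions, not values from the paper.

```python
# Sketch: project an image matrix A onto the subspace spanned by the columns of U and V.
import numpy as np

m, n, m_r, n_r = 64, 64, 8, 8                    # assumed original and reduced sizes
A = np.random.rand(m, n)                         # a stand-in for one face image
U = np.linalg.qr(np.random.rand(m, m_r))[0]      # orthonormal basis of the row subspace
V = np.linalg.qr(np.random.rand(n, n_r))[0]      # orthonormal basis of the column subspace

B = U.T @ A @ V                                  # projected (reduced) 2nd-order tensor
print(B.shape)                                   # (8, 8)
```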

2.2. Principle of MPCA

MPCA is developed on the basis of the PCA algorithm. Its advantage is that it operates on tensors, replacing the traditional algorithms which transform high-dimensional data into one-dimensional vectors. For example, to process 100 face images of a given size, PCA treats them as a matrix with one vectorized image per row, while MPCA treats them as a 3rd-order tensor. MPCA has the advantage of taking into account correlations in the original data which are ignored by PCA.
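To make the difference in data layout concrete, the following sketch contrasts the two arrangements; the 64 × 64 image size is an assumed example, not the size used in the paper.

```python
# Sketch: 100 face images arranged for PCA (a matrix) versus MPCA (a tensor).
import numpy as np

images = np.random.rand(100, 64, 64)     # 100 hypothetical 64x64 face images

pca_input = images.reshape(100, -1)      # PCA: 100 x 4096 matrix, one vectorized image per row
mpca_input = images                      # MPCA: 100 x 64 x 64 third-order tensor
print(pca_input.shape, mpca_input.shape)
```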

Assume there are $M$ tensor image samples $\{\mathcal{X}_m,\ m=1,\ldots,M\}$; a tensor object is denoted by $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$, where $I_n$ denotes the dimensionality of mode $n$ of the $N$th-order tensor. Each tensor can be unfolded along mode $n$ as the matrix
$$\mathbf{X}_{(n)}\in\mathbb{R}^{I_n\times(I_1\cdots I_{n-1}I_{n+1}\cdots I_N)}. \tag{3}$$
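A short NumPy sketch of the mode-n unfolding just defined; the helper name is ours, and it is reused in the later sketches.

```python
# Sketch: mode-n unfolding of a tensor into a matrix with I_n rows.
import numpy as np

def unfold(tensor, mode):
    """Move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.random.rand(64, 64, 100)          # a hypothetical 3rd-order tensor
print(unfold(X, 2).shape)                # (100, 4096)
```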

Here, $\mathbf{U}^{(n)}\in\mathbb{R}^{I_n\times I_n}$ denotes an orthogonal matrix, so $\mathbf{U}^{(n)}\mathbf{U}^{(n)T}=\mathbf{I}$ [24]. Decomposing the tensor with these matrices, we get
$$\mathcal{X}=\mathcal{S}\times_1\mathbf{U}^{(1)}\times_2\mathbf{U}^{(2)}\cdots\times_N\mathbf{U}^{(N)}. \tag{4}$$

The key point of the MPCA algorithm is to find a tensor subspace that captures the variation of the tensor objects and extracts features of the objects. According to (4), the projection of the tensor samples onto the tensor subspace is defined as
$$\mathcal{Y}_m=\mathcal{X}_m\times_1\mathbf{U}^{(1)T}\times_2\mathbf{U}^{(2)T}\cdots\times_N\mathbf{U}^{(N)T},\qquad m=1,\ldots,M, \tag{5}$$
where $\mathcal{Y}_m$ denotes the tensor after projection and $\mathbf{U}^{(n)}\in\mathbb{R}^{I_n\times P_n}$ with $P_n\le I_n$, $n=1,\ldots,N$. Figure 2 depicts the process.

As Figure 2 shows, by projecting each mode of the facial tensor, a low-dimensional facial tensor which preserves the maximum variance can be obtained.
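The multilinear projection in (5) amounts to a sequence of mode-n products, which can be sketched as follows; the helper name and the optional `skip` argument (used later in the ALS sketch) are our own additions.

```python
# Sketch: apply U^(n).T along every mode of a tensor (the mode-n products in (5)).
import numpy as np

def project_modes(tensor, factors, skip=None):
    """Multiply the tensor by U.T along each mode; optionally leave mode `skip` untouched."""
    out = tensor
    for mode, U in enumerate(factors):
        if mode == skip:
            continue
        out = np.moveaxis(np.tensordot(out, U, axes=(mode, 0)), -1, mode)
    return out

A = np.random.rand(64, 64)                                            # one hypothetical face image
factors = [np.linalg.qr(np.random.rand(64, 8))[0] for _ in range(2)]  # assumed projection matrices
print(project_modes(A, factors).shape)                                # (8, 8)
```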

For the $M$ tensor image samples, the total variance (scatter) before projection is as follows:
$$\Psi_{\mathcal{X}}=\sum_{m=1}^{M}\left\|\mathcal{X}_m-\bar{\mathcal{X}}\right\|_F^{2},\qquad \bar{\mathcal{X}}=\frac{1}{M}\sum_{m=1}^{M}\mathcal{X}_m. \tag{6}$$

And the tensors after projection satisfy the following equation:
$$\Psi_{\mathcal{Y}}=\sum_{m=1}^{M}\left\|\mathcal{Y}_m-\bar{\mathcal{Y}}\right\|_F^{2},\qquad \bar{\mathcal{Y}}=\frac{1}{M}\sum_{m=1}^{M}\mathcal{Y}_m. \tag{7}$$

By combining (5) and (6), we can express the scatter after projection in terms of the projection matrices:
$$\Psi_{\mathcal{Y}}=\sum_{m=1}^{M}\left\|\left(\mathcal{X}_m-\bar{\mathcal{X}}\right)\times_1\mathbf{U}^{(1)T}\times_2\mathbf{U}^{(2)T}\cdots\times_N\mathbf{U}^{(N)T}\right\|_F^{2}. \tag{8}$$

The MPCA algorithm is equivalent to solving the optimization problem
$$\left\{\mathbf{U}^{(n)},\ n=1,\ldots,N\right\}=\arg\max_{\mathbf{U}^{(1)},\ldots,\mathbf{U}^{(N)}}\Psi_{\mathcal{Y}}. \tag{9}$$

In (9), by using the alternating least squares (ALS) method, we can carry out a local optimization procedure. When solving for the $n$th projection matrix $\mathbf{U}^{(n)}$, the other matrices are held constant, and the tensor is projected onto the tensor subspace spanned by the remaining modes.

The columns of $\mathbf{U}^{(n)}$ can be obtained as an orthonormal basis of the projection subspace. Each centered sample is projected to a lower-dimensional tensor $\mathcal{Y}_m$, and its $n$th-mode unfolding matrix $\mathbf{Y}_{m(n)}$ is taken as the input of a standard PCA; this amounts to an eigendecomposition of the corresponding mode-$n$ total scatter matrix.
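For reference, the mode-$n$ total scatter matrix whose eigendecomposition is computed in this step can be written, following the formulation of [3] (the symbols below are our rendering, not the paper's own equation), as

$$
\mathbf{\Phi}^{(n)}=\sum_{m=1}^{M}\left(\mathbf{X}_{m(n)}-\bar{\mathbf{X}}_{(n)}\right)\mathbf{U}_{\Phi^{(n)}}\mathbf{U}_{\Phi^{(n)}}^{T}\left(\mathbf{X}_{m(n)}-\bar{\mathbf{X}}_{(n)}\right)^{T},
\qquad
\mathbf{U}_{\Phi^{(n)}}=\mathbf{U}^{(n+1)}\otimes\cdots\otimes\mathbf{U}^{(N)}\otimes\mathbf{U}^{(1)}\otimes\cdots\otimes\mathbf{U}^{(n-1)},
$$

and the columns of $\mathbf{U}^{(n)}$ are taken as the leading eigenvectors of $\mathbf{\Phi}^{(n)}$.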

2.3. MPCA Algorithm

MPCA manages to handle multidimensional objects. Based on the above sections, pseudocode for the computation of the MPCA algorithm can be summarized [25] as shown in Figure 3.

Step 1. Input the sample images $\{\mathcal{X}_m,\ m=1,\ldots,M\}$ and center them as $\tilde{\mathcal{X}}_m=\mathcal{X}_m-\bar{\mathcal{X}}$, where $\bar{\mathcal{X}}$ is the sample mean.

Step 2. Obtain the eigendecomposition of the total scatter matrix of each mode.

Step 3. Calculate the eigenvectors and their corresponding most significant eigenvalues, and output the result as the initial projection matrices $\mathbf{U}^{(n)}$, $n=1,\ldots,N$.

Step 4. (i) Get the projected tensors $\tilde{\mathcal{Y}}_m=\tilde{\mathcal{X}}_m\times_1\mathbf{U}^{(1)T}\times_2\mathbf{U}^{(2)T}\cdots\times_N\mathbf{U}^{(N)T}$.

(ii) Calculate the total scatter $\Psi_0=\sum_{m=1}^{M}\|\tilde{\mathcal{Y}}_m\|_F^2$.

(iii) For $k=1,\ldots,K$: (a) calculate each mode's total scatter matrix eigenvectors and their corresponding most significant eigenvalues, and output the result as the updated $\mathbf{U}^{(n)}$, for $n=1,\ldots,N$; (b) get the projected tensors $\tilde{\mathcal{Y}}_m$ and the scatter $\Psi_k$; (c) if $\Psi_k-\Psi_{k-1}<\eta$, then break the loop and go to Step 5.

Step 5. Finally, calculate the feature tensor of each sample, $\mathcal{Y}_m=\tilde{\mathcal{X}}_m\times_1\mathbf{U}^{(1)T}\times_2\mathbf{U}^{(2)T}\cdots\times_N\mathbf{U}^{(N)T}$; a Python sketch of these steps is given after this list.
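A compact NumPy sketch of Steps 1–5, reusing the unfold and project_modes helpers sketched in Sections 2.1 and 2.2; the function name, the iteration cap, and the convergence tolerance are assumptions for illustration, not the paper's implementation.

```python
# Sketch of the MPCA procedure (Steps 1-5), assuming the helpers defined earlier.
import numpy as np

def mpca(samples, ranks, n_iter=5, tol=1e-6):
    """samples: (M, I1, ..., IN) array; ranks: target dimensions (P1, ..., PN)."""
    X = samples - samples.mean(axis=0)                 # Step 1: center the samples
    N = X.ndim - 1

    def leading_eigvecs(mode, tensors, k):
        # Steps 2-3 / 4(a): eigendecompose the mode-n scatter, keep the k leading eigenvectors
        Xn = np.concatenate([unfold(t, mode) for t in tensors], axis=1)
        w, V = np.linalg.eigh(Xn @ Xn.T)
        return V[:, np.argsort(w)[::-1][:k]]

    factors = [leading_eigvecs(n, X, ranks[n]) for n in range(N)]   # initial U^(n)
    prev = -np.inf
    for _ in range(n_iter):                             # Step 4: ALS-style refinement
        for n in range(N):
            partial = [project_modes(x, factors, skip=n) for x in X]
            factors[n] = leading_eigvecs(n, partial, ranks[n])
        Y = np.stack([project_modes(x, factors) for x in X])
        scatter = float((Y ** 2).sum())                  # total scatter after projection
        if scatter - prev < tol:                         # Step 4(c): convergence test
            break
        prev = scatter
    # Step 5: the low-dimensional feature tensors and the projection matrices
    return np.stack([project_modes(x, factors) for x in X]), factors
```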

2.4. LDA Algorithm

LDA (linear discriminant analysis) projects images onto a lower-dimensional vector space to achieve maximum discrimination, as follows.

Step 1. Compute the average sample value of each class of facial images in the original space. The total number of classes is denoted by $c$, and $x_i^{(j)}$ denotes the $i$th object of the $j$th class of samples:
$$\mu_j=\frac{1}{N_j}\sum_{i=1}^{N_j}x_i^{(j)},\qquad j=1,\ldots,c.$$

Step 2. Compute the covariance matrix of each class:
$$S_j=\sum_{i=1}^{N_j}\left(x_i^{(j)}-\mu_j\right)\left(x_i^{(j)}-\mu_j\right)^{T}.$$

Step 3. Compute the within-class and between-class scatter matrices:
$$S_w=\sum_{j=1}^{c}S_j,\qquad S_b=\sum_{j=1}^{c}N_j\left(\mu_j-\mu\right)\left(\mu_j-\mu\right)^{T},$$
where $\mu$ is the overall mean and $N_j$ is the number of samples in class $j$.

Step 4. Compute the eigenvectors of the matrix $S_w^{-1}S_b$ to obtain the projection vectors. The dimensionality-reduced data can then be obtained by projection [26, 27]; a NumPy sketch of these steps is given after this list.
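A minimal NumPy sketch of Steps 1–4; the function name lda_fit, the use of a pseudo-inverse, and the default output dimension are illustrative assumptions.

```python
# Sketch of LDA (Steps 1-4): class means, scatter matrices, and projection vectors.
import numpy as np

def lda_fit(X, y, n_components=None):
    """X: (num_samples, dim) feature vectors; y: class labels. Returns the projection matrix W."""
    classes = np.unique(y)
    mu = X.mean(axis=0)                               # overall mean
    dim = X.shape[1]
    Sw = np.zeros((dim, dim))                         # within-class scatter
    Sb = np.zeros((dim, dim))                         # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)                        # Step 1: per-class mean
        Sw += (Xc - mu_c).T @ (Xc - mu_c)             # Steps 2-3: per-class covariance, summed
        diff = (mu_c - mu).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T                 # Step 3: between-class scatter
    # Step 4: eigenvectors of Sw^{-1} Sb give the projection directions
    w, V = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(w.real)[::-1]
    k = n_components or (len(classes) - 1)
    return V[:, order[:k]].real                       # projection matrix W
```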

After dimensionality reduction using MPCA, the resulting matrices are arranged column by column into vectors and used as inputs to the LDA algorithm. By using the MPCA algorithm to reduce the dimension of the images, we not only solve the singular matrix problem but also retain the structural information in the images, thus improving the recognition rate.

2.5. KNN Algorithm

The K-nearest neighbor (KNN) algorithm [28, 29] is adopted for sample set classification here, and the concrete steps are as follows.

Step 1. Select several candidate values of the parameter K.

Step 2. Apply cross-validation on the training face images for each candidate value.

Step 3. Minimize the cross-validation misclassification rate and obtain the corresponding parameter K.

Step 4. Construct a prediction model with the selected K; a sketch of this procedure is given after this list.
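A minimal sketch of this selection procedure using scikit-learn (an assumption; the paper does not name an implementation), where the features and labels would come from the MPCA + LDA stage.

```python
# Sketch: choose K by cross-validation and fit the final KNN classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_knn(features, labels, candidates=(1, 3, 5, 7)):
    """Pick the K with the lowest cross-validated error, then fit the model."""
    scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                 features, labels, cv=5).mean()
              for k in candidates}
    best_k = max(scores, key=scores.get)      # lowest error = highest mean accuracy
    return KNeighborsClassifier(n_neighbors=best_k).fit(features, labels)
```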

3. Process of the Recognition Algorithm

3.1. Preprocessing

Image preprocessing and normalization are vital for face recognition systems, as images are often affected by image quality, illumination, face rotation, facial expression [8, 30], and so forth. In order to offset the above factors, it is necessary to carry out face normalization before facial feature extraction.

Our data are preprocessed, normalized images of a fixed resolution. In our research, histogram equalization was applied to each image.
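A minimal NumPy sketch of histogram equalization on an 8-bit grey-level image; this is the standard formulation, not a reproduction of the paper's own equation.

```python
# Sketch: histogram equalization of an 8-bit grey-level face image.
import numpy as np

def equalize_histogram(image):
    """Map grey levels through the normalized cumulative histogram."""
    img = image.astype(np.uint8)
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize the CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)            # remapped grey levels
```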

3.2. Dimensionality Reduction Using MPCA and Feature Matrix Extraction Using LDA

MPCA reduces the dimensions of the input face images and generates feature projection matrices [30], which are then taken as input samples for LDA. The MPCA and LDA combination is used to construct an LDA subspace, from which both MPCA features and LDA features can be extracted.
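Putting the two stages together, the following sketch shows how they might be chained, reusing the hypothetical mpca() and lda_fit() helpers sketched in Section 2; the function name and the default ranks are illustrative assumptions.

```python
# Sketch of the MPCA -> LDA feature extraction pipeline described above.
import numpy as np

def extract_features(train_images, train_labels, ranks=(10, 10)):
    """train_images: (M, rows, cols) array of preprocessed face images."""
    tensors, factors = mpca(train_images, ranks)        # MPCA tensor subspace features
    vectors = tensors.reshape(len(train_images), -1)    # arrange each feature tensor into a vector
    W = lda_fit(vectors, train_labels)                  # LDA projection matrix
    features = vectors @ W                              # final MPCA + LDA features
    return features, factors, W
```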

The detailed steps have been described in Sections 2.2 and 2.3.

3.3. Face Recognition Using L2 Distance Measure

We used the resultant output acquired above as input samples for training and applied the aforesaid techniques to get the feature matrix. Then we carried out a similarity measure on the image samples. In our research, we chose the L2 distance as the measure.
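The L2 distance used as the similarity measure takes the standard form (our rendering; the paper's own numbered equation is not reproduced here):

$$
d\left(\mathbf{y}_i,\mathbf{y}_j\right)=\left\|\mathbf{y}_i-\mathbf{y}_j\right\|_2=\sqrt{\sum_{k}\left(y_{ik}-y_{jk}\right)^{2}},
$$

where $\mathbf{y}_i$ and $\mathbf{y}_j$ are the feature vectors of two image samples.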

The KNN classifier [31] is adopted for sample set classification here; the procedure and details are introduced in Section 2.5.

The overall approach of face recognition proposed in this paper is shown in Figure 3.

4. Experiments

We evaluated the performance of our MPCA + LDA based algorithm in this research and compared it with the PCA, MPCA, and PCA + LDA algorithms by performing experiments on the ORL database [32]. In order to examine the ability of our method, we also tried it on other classical face databases such as FERET and YALE.

The experiments were conducted with three groups. We chose part of the images in each group for training and the rest for testing. As the prior probabilities of all facial classes are the same, equal priors are used in the LDA algorithm.

Initially, we tested how different parameters affect the recognition error rate: how the classification result is affected by the dimensionality after MPCA, the dimensionality after LDA, and the K value of the KNN algorithm. The LDA algorithm requires the reduced dimension to be no greater than the total number of samples minus 1; there are 10 samples in each category, which bounds the reduction accordingly. The other parameters of MPCA are set to their optimized values.

4.1. Experiments on the ORL

The ORL face database contains a total of 400 images of 40 individuals (each individual has 10 gray scale images) [33]. Some photos were taken at different times, and some with varying facial expressions and facial details. Each image has 256 grey levels per pixel [34]. Figure 4 shows image examples of two persons before preprocessing.

The images were divided into three groups. For the first group, we select the first 5 images of the first 20 persons as training data and the last 5 images of the first 20 persons as test samples for face identification. For the second group, we select the first 5 images of the remaining 20 persons as training data and the last 5 images of the remaining 20 persons as testing samples. For the third group, we select the first 5 images of all 40 persons as training data and the last 5 images of all 40 persons as testing samples.

Recognition error rate of PCA is shown in Figure 5.

Judging from the figure, the recognition error rate reaches its minimal value when the value of K equals 1. The PCA recognition accuracy reaches 58%–82%.

The error rate of MPCA under different values of K is shown in Figure 6. As shown in the figure, the error rate is minimal when K equals 1. The MPCA recognition accuracy reaches 75%–85% in the experiments.

The error rate of the PCA + LDA algorithm reaches its minimal value of 7% when the reduced dimension equals 8. How different LDA dimension reductions affect recognition accuracy is shown in Figure 7.

We can see from Figures 5, 6, and 7 that the dimension after LDA increases as the number of samples increases. When applying the PCA + LDA algorithm, we use MPCA to decrease the dimension of the facial samples to 11 and then use LDA. For the 1st group, the dimension is reduced to 7; for the 2nd group, to 8; and for the 3rd group, to 10. We can conclude that the LDA algorithm alone does not handle multidimensional objects well. The accuracy of PCA + LDA reaches 86%–88%.

The MPCA + LDA algorithm only produces a higher error rate of 10%–25% when the reduced dimension equals 10; in other situations, the recognition error rate is very low. The error rate under different LDA dimensionality reductions, with the other parameter fixed at 8, is shown in Figure 8. The recognition accuracy of the algorithm reaches a high value.

The results of the experiments on the ORL database are shown in Figure 9. Taking the recognition accuracy of the four algorithms for comparison, the combination of MPCA and LDA does result in better recognition performance than the traditional methods.

4.2. Experiments on More Face Databases

We chose FERET and YALE for our experiments. The implementation steps are similar to those in the above section, so we simplify the description and focus on the results.

FERET face database consists of a total of 1400 images of 200 individuals (each person has 7 different images). Figure 10 shows image examples of two persons before preprocessing.

YALE face database contains 165 images of 15 individuals; Figure 11 shows image examples of two persons before preprocessing [35].

The performance of the PCA, MPCA, PCA + LDA, and MPCA + LDA techniques is tested by varying the number of eigenvectors. We chose one group of results in each database for comparison.

PCA performed worse on YALE than on FERET because of its poor adaptability to variations in image brightness and facial expression, as shown in Figure 12.

Although Figure 13 shows that MPCA performed well on face recognition in the YALE database, the process takes a longer time than PCA.

Figures 14 and 15 show that both PCA + LDA and MPCA + LDA can attain high accuracy and a low error rate in recognition. However, PCA + LDA effectively sees only the Euclidean structure, while MPCA + LDA succeeds in discovering the underlying structure [36].

Compared with all the other algorithms, even with simple preprocessing, MPCA + LDA achieved the best overall performance on both the FERET and YALE databases.

5. Conclusions

This paper presents an algorithm for face recognition based on MPCA and LDA. As opposed to other traditional methods, our proposed algorithm treats data as multidimensional tensors and fully considers the spatial relationships between pixels. The advantage of our approach is of great relevance to applications, as it is capable of recognizing face data sets under different lighting conditions and with various facial expressions. The LDA algorithm projects the data to a new space and yields accurate clustering results in our experiments. Compared with traditional face recognition algorithms, our proposed algorithm not only boosts recognition accuracy but also relieves the dimensionality bottleneck and efficiently addresses the small sample size problem. Future work will include applying this approach to larger face databases such as CMU Multi-PIE, NIST's FRGC, and MBGC.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their appreciation for the reviewers' insightful and constructive comments and would like to thank everyone for their hard work on this research. The work was supported by a grant from the Ph.D. Programs Foundation of the Ministry of Education of China (no. 20120141120006), the Hubei Planning Project of Research and Development (no. 2011BAB035), and the Wuhan Planning Project of Science and Technology (no. 2013010501010146).

References

1. F. Song, H. Liu, D. Zhang, and J. Yang, “A highly scalable incremental facial feature extraction method,” Neurocomputing, vol. 71, no. 10-12, pp. 1883–1888, 2008.
2. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: a literature survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399–458, 2003.
3. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: multilinear principal component analysis of tensor objects,” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
4. S. Fernandes and J. Bala, “Performance analysis of PCA-based and LDA-based algorithms for face recognition,” International Journal of Signal Processing Systems, vol. 1, no. 1, pp. 1–6, 2013.
5. B. A. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, “Recognizing faces with PCA and ICA,” Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 115–137, 2003.
6. K. Choudhary and N. Goel, “A review on face recognition techniques,” in Proceedings of the International Conference on Communication and Electronics System Design, International Society for Optics and Photonics, 2013.
7. R. Gottumukkal and V. K. Asari, “An improved face recognition technique based on modular PCA approach,” Pattern Recognition Letters, vol. 25, no. 4, pp. 429–436, 2004.
8. J. Li, B. Zhao, and H. Zhang, “Face recognition based on PCA and LDA combination feature extraction,” in Proceedings of the 1st International Conference on Information Science and Engineering (ICISE '09), pp. 1240–1243, IEEE, December 2009.
9. P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
10. X. R. L. Y. Tang Liang, “An face recognition technique based on discriminative common vector in PCA transform space,” Journal of Wuhan University, vol. 34, no. 4, 2009.
11. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “A survey of multilinear subspace learning for tensor data,” Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011.
12. A. K. Bansal and P. Chawla, “Performance evaluation of face recognition using PCA and N-PCA,” International Journal of Computer Applications, vol. 76, no. 8, pp. 14–20, 2013.
13. J. F. Pereira, R. M. Barreto, G. D. C. Cavalcanti, and T. I. Ren, “A robust feature extraction algorithm based on class-modular image principal component analysis for face verification,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '11), 2011.
14. F. S. Tsai, “Dimensionality reduction for computer facial animation,” Expert Systems with Applications, vol. 39, no. 5, pp. 4965–4971, 2012.
15. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition,” IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 103–123, 2009.
16. W. Zhao, R. Chellappa, and N. Nandhakumar, “Empirical performance analysis of linear discriminant classifier,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 164–169, IEEE, June 1998.
17. Y. Xie, “LDA algorithm and its application to face recognition,” Computer Engineering and Applications, vol. 46, no. 19, pp. 189–192, 2010.
18. S. Yan, D. Xu, Q. Yang, L. Zhang, and H. Zhang, “Multilinear discriminant analysis for face recognition,” IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 212–220, 2007.
19. D. Zhang and Z. Zhou, “(2D)2 PCA: two-directional two-dimensional PCA for efficient face representation and recognition,” Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
20. D. Zhang, X. You, P. Wang, S. N. Yanushkevich, and Y. Y. Tang, “Facial biometrics using nontensor product wavelet and 2d discriminant techniques,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, no. 3, pp. 521–543, 2009.
21. J. Yang, D. Zhang, and A. F. Frangi, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
22. Y. Li, H. Xie, and Y. Zhou, “Study of eyebrow recognition based on 2DPCA,” Journal of Wuhan University, vol. 57, no. 6, pp. 517–522, 2011.
23. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Uncorrelated multilinear principal component analysis through successive variance maximization,” in Proceedings of the 25th International Conference on Machine Learning, pp. 616–623, July 2008.
24. L. De Lathauwer, B. De Moor, and J. Vandewalle, “On the best rank-1 and rank-(R1, R2,..., RN) approximation of higher-order tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
25. C. Chen, S. Zhang, and Y. Chen, “Face recognition based on MPCA,” in Proceedings of the 2nd International Conference on Industrial Mechatronics and Automation (ICIMA '10), pp. 322–325, Wuhan, China, May 2010.
26. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces versus fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
27. X. Su, Q. Zeng, and X. Wang, “Several combination methods of face recognition based on PCA and LDA,” Computer Engineering and Design, vol. 33, no. 9, pp. 3574–3578, 2012.
28. G.-F. Lu, Y. J. Wang, and J. Zou, “Improved complete neighbourhood preserving embedding for face recognition,” IET Computer Vision, vol. 7, no. 1, pp. 71–79, 2013.
29. E. Nasibov and C. Kandemir-Cavas, “Efficiency analysis of KNN and minimum distance-based classifiers in enzyme family prediction,” Computational Biology and Chemistry, vol. 33, no. 6, pp. 461–464, 2009.
30. J. Shermina, “Face recognition system using multilinear principal component analysis and locality preserving projection,” in Proceedings of the IEEE GCC Conference and Exhibition (GCC '11), pp. 283–286, IEEE, February 2011.
31. Y. Liaw, M. Leou, and C. Wu, “Fast exact k nearest neighbors search using an orthogonal search tree,” Pattern Recognition, vol. 43, no. 6, pp. 2351–2358, 2010.
32. F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 138–142, Sarasota, Fla, USA, December 1994.
33. Y. Jin and Q. Ruan, “Orthogonal locality sensitive discriminant analysis for face recognition,” Journal of Information Science and Engineering, vol. 25, no. 2, pp. 419–433, 2009.
34. P. P. Paul and M. Gavrilova, “Multimodal cancelable biometrics,” in Proceedings of the IEEE 11th International Conference on Cognitive Informatics & Cognitive Computing (ICCI '12), 2012.
35. P. Punitha and D. S. Guru, “Symbolic image indexing and retrieval by spatial similarity: an approach based on B-tree,” Pattern Recognition, vol. 41, no. 6, pp. 2068–2085, 2008.
36. X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang, “Face recognition using Laplacianfaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328–340, 2005.

Copyright © 2014 Jun Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
