Research Article  Open Access
Hai Guo, Jinghua Yin, Jingying Zhao, Yuanyuan Liu, Lei Yao, Xu Xia, "An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1", Advances in Materials Science and Engineering, vol. 2015, Article ID 205817, 9 pages, 2015. https://doi.org/10.1155/2015/205817
An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1
Abstract
An automatic detection model based on pattern recognition technology is proposed in this paper; it can measure the elements of nanocomposite film. Gray level cooccurrence matrix (GLCM) features are extracted from surface morphology images of different types of film; the feature dimension is then reduced by principal component analysis (PCA). The film element can thus be identified by an Adaboost M1 strong classifier built from ten decision tree classifiers. The experimental results show that this model outperforms SVM (support vector machine), neural network, and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
1. Introduction
A composite material is composed of two or more different kinds of materials; it can also be turned into a material with new performance by physical and chemical methods. The detection and identification of composite materials have been an academic focus. Soares et al. from Portugal proposed the identification of material properties of composite plate specimens [1]. Cunha et al. put forward genetic algorithms for the identification of elastic constants of composite materials [2]. Leo et al. [3] proposed the identification of defective areas in composite materials by bivariate EMD analysis of ultrasound. Pattern recognition means the automatic processing and identification of patterns by computer with mathematical techniques; it includes statistical pattern recognition, syntactic pattern recognition, fuzzy pattern recognition, and neural network pattern recognition. Pattern recognition technology has been widely used in computer vision, biomedical image analysis, optical character recognition, natural language processing, speech recognition, handwriting recognition, biometrics, document classification, Internet search engines, credit scoring, topography, and several other fields.
In recent years, pattern recognition and intelligent computation have been used in many studies for the analysis and identification of composite materials, such as damage detection, performance prediction, and performance analysis. Using the ideas of statistical pattern recognition, Hamdi et al. took the Hilbert-Huang transform of the acoustic emission (AE) signal as the identification feature, after which damage detection in composite materials was achieved by pattern classification [4]. González-Carrato et al. proposed a fault detection and diagnosis (FDD) method for macro fiber composite material, which combines the wavelet decomposition technique with statistical pattern recognition for defect detection [5]. Farhidzadeh et al. proposed a cracking mode detection model for concrete structures using pattern recognition, with acoustic emission signals as features and support vector machines as the classifier; according to the experimental results, it is possible to detect cracks in concrete [6]. Dervilis et al. [7] used an Auto-Associative Neural Network classifier for damage diagnosis on a wind turbine blade. Oskouei et al. used signal features of acoustic emissions (AEs) to identify damage mechanisms in glass/polyester composites; the dimensionality of the AE features was effectively reduced by principal component analysis, and the AEs were classified after fuzzy c-means clustering. Yang et al. proposed an optimization method for composite polymers using support vector machines [8]. However, few studies on the detection and identification of nanocomposite film have been done. The nanocomposite film, having the characteristics of both composite materials and nanomaterials, has become a focus of academic research.
A computer automatic detection technology using pattern recognition (called GABD) is put forward in this paper, which can detect the elements of nanocomposite film. After pure PI film and nanocomposite films doped with inorganic nanoparticles are prepared, scanning electron microscope images of the film surfaces are obtained; the texture features of the film images are then extracted with the grey level cooccurrence matrix, and the feature dimension is reduced by PCA. After that, hybrid polyimide nanocomposite films and pure polyimide film are identified and classified by a strong classifier built from ten Decision Stumps by Adaboost M1. The GABD method can be used in the damage detection of film and the automatic detection of nanocomposite film elements and other composite material elements.
2. Materials and Methods
2.1. The Preparation of Nanocomposite Film
In order to detect and identify nanocomposite film effectively, three typical films are selected: pure PI film, a film doped with nanoparticles (PI/BaTiO_{3}), and a film doped with two different shapes of nanoparticles (PI/MMT + TiO_{2}).
Polyimide matrix inorganic nanocomposite thin films were prepared by the in situ polymerization method. The experimental materials are ODA, PMDA, DMAC, nanoparticles (TiO_{2}, BaTiO_{3}, MMT), C_{2}H_{6}O, and so on. Firstly, PMDA is put into the solution of ODA and DMAC to make sticky polyamic acid, into which the nanoparticles (TiO_{2}, BaTiO_{3}, MMT) are then added, separately. After paving the membrane, heat treatment, and transformation, three kinds of films are obtained: PI/MMT + TiO_{2}, PI/BaTiO_{3}, and pure PI. Figure 1 shows the surface morphology of the three films (1000x). From Figure 1(a) it can be seen that the surface of the pure PI film is smooth and glossy. Figure 1(b) shows particles mosaicked into the polymer. Figure 1(c) shows a polymer surface with sheets. The surface morphologies of the three kinds of films are thus very different.
(a) PI
(b) PI/BaTiO3
(c) PI/MMT + TiO2
2.2. Detection Model
Automatic detection and identification of nanocomposite film depend on the pattern recognition model, for which feature extraction and classifier design are the two most important factors. The main process of the automatic detection and identification model of nanocomposite film includes, firstly, preparing film samples by the in situ polymerization method; secondly, extracting grey level cooccurrence matrix features from the SEM surface morphology images; and finally, reducing the feature dimension using PCA. Classification recognition is then made by putting the dimension-reduced feature vector into the Adaboost M1 classifier. The classification recognition of nanocomposite film is shown in Figure 2.
2.3. Gray Level Cooccurrence Matrix Feature
Texture features have been widely applied in pattern recognition, remote sensing, image retrieval, and other fields; they are very important features for image analysis and processing. There are many methods for extracting image texture; the common ones include statistical analysis, structure analysis, and spectral analysis. The gray level cooccurrence matrix is a statistical analysis method. It was first put forward by Haralick, of the City University of New York, and has been of great importance in image texture analysis; the paper that first proposed the gray level cooccurrence matrix feature [9] in 1973 has been cited 10827 times so far. The gray level cooccurrence matrix is built on the second-order conditional probability density function, calculating the probability that two pixels at a specific direction and distance transfer from one gray level to another; information about direction, interval, and range of variation is thereby given.
Haralick et al. put forward 14 texture quantification methods based on the grey level cooccurrence matrix. In the following, $p(i,j)$ denotes the normalized cooccurrence matrix entry for grey levels $i$ and $j$ ($i,j = 1,\dots,N_g$) at a given step length $d$ and angle $\theta$; $\mu_x, \mu_y, \sigma_x, \sigma_y$ are the means and standard deviations of the marginal distributions $p_x(i) = \sum_j p(i,j)$ and $p_y(j) = \sum_i p(i,j)$.

(1) Angular second moment (ASM):
$$\mathrm{ASM} = \sum_{i}\sum_{j} p(i,j)^2.$$
Here $i$ and $j$ are grey levels, $d$ is the step length, and $\theta$ is the angle. ASM measures the evenness of the grey level distribution and is proportional to the fineness of the texture.

(2) Contrast:
$$\mathrm{CON} = \sum_{n=0}^{N_g-1} n^2 \sum_{|i-j|=n} p(i,j).$$

(3) Correlation:
$$\mathrm{COR} = \frac{\sum_i \sum_j (ij)\,p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}.$$

(4) Entropy:
$$\mathrm{ENT} = -\sum_i \sum_j p(i,j)\log p(i,j).$$

(5) Variance:
$$\mathrm{VAR} = \sum_i \sum_j (i-\mu)^2\,p(i,j),$$
where $\mu$ is the mean value of $p(i,j)$.

(6) Sum of average:
$$\mathrm{SA} = \sum_{k=2}^{2N_g} k\,p_{x+y}(k), \quad p_{x+y}(k) = \sum_{i+j=k} p(i,j).$$

(7) Sum of variance:
$$\mathrm{SV} = \sum_{k=2}^{2N_g} (k-\mathrm{SA})^2\,p_{x+y}(k).$$
The sum of variance is proportional to the texture period.

(8) Inverse difference moment:
$$\mathrm{IDM} = \sum_i \sum_j \frac{p(i,j)}{1+(i-j)^2}.$$
The inverse difference moment is proportional to the local regularity of the texture; it is also a metric of the local change of the image texture feature.

(9) Variance of difference:
$$\mathrm{DV} = \operatorname{variance}\bigl(p_{x-y}\bigr), \quad p_{x-y}(k) = \sum_{|i-j|=k} p(i,j),\ k = 0,\dots,N_g-1.$$
The variance of difference is proportional to the contrast between two pixels; it is equivalent to the grey level difference of adjacent pixels.

(10) Sum of entropy:
$$\mathrm{SE} = -\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k).$$

(11) Difference of entropy:
$$\mathrm{DE} = -\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k).$$

(12) Shadow of clustering:
$$\mathrm{SHADE} = \sum_i \sum_j (i+j-\mu_x-\mu_y)^3\,p(i,j).$$

(13) Prominence of clustering:
$$\mathrm{PROM} = \sum_i \sum_j (i+j-\mu_x-\mu_y)^4\,p(i,j).$$

(14) Maximal probability:
$$\mathrm{MP} = \max_{i,j}\, p(i,j).$$
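As an illustration, the construction of a cooccurrence matrix for one offset and a few of the features above can be sketched in NumPy. The authors' implementation is in MATLAB, so this sketch, with its toy 4-level image, is only an assumed equivalent, not their code:

```python
import numpy as np

def glcm(img, d=(0, 1), levels=8):
    """Normalized gray level cooccurrence matrix for pixel offset d = (dy, dx)."""
    m = np.zeros((levels, levels))
    dy, dx = d
    h, w = img.shape
    # count transitions from gray level img[y, x] to img[y + dy, x + dx]
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_subset(p):
    """ASM, contrast, entropy, and inverse difference moment of a GLCM p."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()                        # angular second moment
    contrast = ((i - j) ** 2 * p).sum()         # contrast
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()          # entropy (0 log 0 := 0)
    idm = (p / (1.0 + (i - j) ** 2)).sum()      # inverse difference moment
    return asm, contrast, entropy, idm

# toy 4x4 image with 4 gray levels, horizontal offset (0, 1)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, d=(0, 1), levels=4)
asm, contrast, entropy, idm = haralick_subset(p)
```

In practice one would average such features over several offsets (e.g. d = 1 at 0°, 45°, 90°, 135°) to obtain a rotation-tolerant texture descriptor.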
2.4. PCA Dimensionality Reduction
Principal component analysis (PCA) [10] is a very effective method which is widely used in the pattern recognition, intelligent computing, and multivariate data analysis. Its core idea is to represent multiple variables in a few dimensions (preferably two) by rotation and to classify them, correspondingly.
If the process variable $X$ follows an $m$-dimensional joint normal distribution, we can have the following transformation:
$$t_i = X p_i, \quad i = 1, 2, \dots, m,$$
where $p_i$ is the feature vector (eigenvector of the covariance matrix of $X$) corresponding to the $i$th eigenvalue $\lambda_i$, and $t_i$ is the $i$th pivot element. Through this transformation, a set of measurements of the controlled vector $x$ is calculated as $t_i = x^{T} p_i$, which is called the score of the measurements on the $i$th pivot element, and the vector $p_i$ is called the load vector of the $i$th pivot element. This method of solving the pivot elements, load vectors, and scores is the PCA.
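The score and load-vector computation can be sketched via the eigendecomposition of the sample covariance matrix. This is a generic illustration; the 100x14 random matrix stands in for a batch of 14-dimensional GLCM feature vectors and is a made-up placeholder:

```python
import numpy as np

def pca_scores(X, k):
    """Project rows of X onto the top-k eigenvectors (load vectors) of the
    sample covariance matrix; the columns of T are the pivot-element scores."""
    Xc = X - X.mean(axis=0)                  # center each variable
    cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # keep the k largest
    P = vecs[:, order]                       # load vectors p_i
    return Xc @ P, vals[order]               # scores t_i and eigenvalues

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 14))               # placeholder feature matrix
T, lam = pca_scores(X, k=3)
```

By construction the score columns are uncorrelated, with variances equal to the retained eigenvalues, which is what makes the reduced representation suitable as classifier input.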
2.5. Adaboost M1 Classifier
Classifier ensembles are a research focus in machine learning, pattern recognition, and data mining, and the most typical ensemble is the Adaboost classifier. The Adaboost classifier is developed from the boosting algorithm, which turns weak learners into a strong classifier by combining several weak classifiers. Schapire improved the boosting algorithm and put forward Adaptive Boosting, or Adaboost for short [11]. The problems that the Adaboost algorithm can solve mainly include binary classification (Adaboost), multiclass single-label classification (Adaboost M1), and multiclass multilabel classification (Adaboost M2).
Adaboost M1 algorithm description is as in Algorithm 1.

The main idea of the Adaboost M1 algorithm is to choose the best weak classifier in each round and give it a weight value; the flow chart is shown in Figure 3. Here $h_t$ represents the classifier generated in round $t$.
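The round-by-round idea can be made concrete with a minimal Adaboost M1 sketch using one-level threshold stumps on toy data. The paper's classifier is built in MATLAB with Weka-style parameters (ten Decision Stumps); the code below is an assumed illustration of the algorithm, not the authors' implementation, and three rounds suffice for the toy three-class problem:

```python
import numpy as np

def fit_stump(X, y, w, classes):
    """Best weighted one-level decision tree (Decision Stump)."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for s in np.unique(X[:, f]):
            left = X[:, f] <= s
            # weighted majority class on each side of the split
            cl = max(classes, key=lambda c: w[left & (y == c)].sum())
            cr = max(classes, key=lambda c: w[~left & (y == c)].sum())
            err = w[np.where(left, cl, cr) != y].sum()
            if err < best_err:
                best, best_err = (f, s, cl, cr), err
    return best, best_err

def stump_predict(X, stump):
    f, s, cl, cr = stump
    return np.where(X[:, f] <= s, cl, cr)

def adaboost_m1(X, y, rounds=10):
    classes = sorted(set(y))
    w = np.full(len(y), 1.0 / len(y))        # uniform initial weights
    stumps, alphas = [], []
    for _ in range(rounds):
        stump, err = fit_stump(X, y, w, classes)
        if err >= 0.5:                       # M1 stops if the learner is too weak
            break
        beta = max(err, 1e-12) / (1.0 - err)
        correct = stump_predict(X, stump) == y
        w = np.where(correct, w * beta, w)   # shrink weights of correct samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(np.log(1.0 / beta))    # weight of this weak classifier
    return stumps, alphas, classes

def predict(X, stumps, alphas, classes):
    votes = np.zeros((len(X), len(classes)))
    for stump, a in zip(stumps, alphas):
        pred = stump_predict(X, stump)
        for k, c in enumerate(classes):
            votes[:, k] += a * (pred == c)   # weighted vote of each stump
    return np.array(classes)[votes.argmax(axis=1)]

# toy 3-class data standing in for the three film classes
X = np.array([[0.1], [0.2], [0.3], [1.1], [1.2], [1.3], [2.1], [2.2], [2.3]])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
stumps, alphas, classes = adaboost_m1(X, y, rounds=3)
```

Note the design choice in the weight update: shrinking the weights of correctly classified samples by beta (rather than growing the wrong ones) is the original M1 formulation; after normalization the two are equivalent.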
3. Experimental Results and Analysis
3.1. Sample
Surface texture images magnified a thousand times are obtained by scanning electron microscope as samples. Because of the limits of the experimental conditions, every film has only fifteen scanning electron microscopic images; for better training and classification, screenshots are taken at random positions. Each of the three kinds of films thus yields 2000 samples, so 6000 samples in all are obtained, as shown in Figure 4. In order to show the effect of texture classification alone, no pretreatment is done on these images in this paper. The texture feature extraction of the grey level cooccurrence matrix and the recognition program are implemented in MATLAB.
(a) PI/BaTiO3
(b) PI/MMT + TiO2
(c) PI
3.2. Evaluation of Classification Performance
TP (true positive) and TN (true negative) represent the numbers of positive and negative samples, respectively, that are classified correctly, while FP (false positive) and FN (false negative) represent those that are classified wrongly. The following definitions can then be obtained:
$$\mathrm{TP\ rate} = \frac{TP}{TP+FN}, \quad \mathrm{FP\ rate} = \frac{FP}{FP+TN}, \quad \mathrm{precision} = \frac{TP}{TP+FP}, \quad \mathrm{recall} = \frac{TP}{TP+FN}.$$
Sometimes precision and recall contradict each other, and the F-Measure is then needed. The F-Measure is the weighted harmonic mean of precision and recall, defined as
$$F = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.$$
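For concreteness, these definitions can be checked against the class 2 confusion counts reported in Section 3.3 (TP = 1999, FN = 1, and, assuming the 128 misclassified PI/BaTiO3 samples were all assigned to class 2, FP = 128 with TN = 3872):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F-Measure)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# class 2 (PI/MMT + TiO2) counts as assumed from the Section 3.3 confusion data
p2, r2, f2 = precision_recall_f(tp=1999, fp=128, fn=1)
fp_rate2 = 128 / (128 + 3872)   # FP rate = FP / (FP + TN)
```

The resulting FP rate of 0.032 matches the value reported for class 2 in Table 3, which supports the assumed reading of the confusion counts.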
The ROC area is another measure of classification effect; in ROC space, the FP rate is the abscissa and the TP rate is the ordinate. TP rate, precision, recall, F-Measure, and ROC area are positively correlated with classification performance, while FP rate is negatively correlated with it.
3.3. Result and Analysis
In order to further validate the GABD method proposed in this paper, 10-fold cross validation is used to train and recognize the films. A MacBook Pro is used as the hardware platform, with an Intel i7-2640M CPU and 16 GB of memory. All the feature extraction, training, and classification are programmed in MATLAB 2012a.
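The fold construction for such a 10-fold cross validation can be sketched as follows. This is a generic NumPy sketch under an arbitrary seed, not the authors' MATLAB code:

```python
import numpy as np

def ten_fold_indices(n, seed=1):
    """Shuffle the n sample indices and split them into 10 disjoint folds;
    each fold serves once as the test set while the rest train the model."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, 10)

folds = ten_fold_indices(6000)   # 6000 samples as in Section 3.1
```

Each of the ten runs trains on 5400 samples and tests on the held-out 600; the reported rates are averages over the ten runs.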
In experiment 1, the 10-fold cross validation method is used to train and recognize the samples with the Adaboost classifier. The parameters are as follows: numIterations = 10, seed = 1, and weight threshold = 100; each weak classifier is a Decision Stump. After training, the weight values of the ten Decision Stump classifiers are as shown in Table 1. Using 10-fold cross validation on the samples for testing, the recognition results obtained are shown in Tables 2 and 3, where class 1 is PI/BaTiO_{3}, class 2 is PI/MMT + TiO_{2}, and class 3 is PI.

According to the data in Table 2, the recognition accuracy of pure PI is 100%, which means that all 2000 samples are classified correctly. One PI/MMT + TiO_{2} sample is wrongly classified as PI/BaTiO_{3}; 1872 PI/BaTiO_{3} samples are classified correctly, while the other 128 are classified wrongly. The degree of distinction among pure PI, PI/BaTiO_{3}, and PI/MMT + TiO_{2} is thus obvious. PI/BaTiO_{3} and PI/MMT + TiO_{2} are also easy to distinguish, with only 129 samples classified wrongly between them.
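The overall accuracy implied by these counts can be checked directly (class order and correct counts as read from Table 2, 2000 samples per class):

```python
# correct counts per class: PI/BaTiO3, PI/MMT + TiO2, PI
correct = [1872, 1999, 2000]
total = 3 * 2000
accuracy = sum(correct) / total            # overall correctly classified fraction
per_class = [c / 2000 for c in correct]    # per-class recognition rates
```

The result, 5871/6000 = 97.85%, agrees with the "correctly classified instances" figure reported for GABD in experiment 2.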

Table 3 shows the recognition results of the three kinds of samples under 10-fold cross validation. It includes the TP rate, FP rate, precision, recall, F-Measure, and ROC area of the three kinds of films and their average recognition results. According to Table 3, the TP rates of class 2 and class 3 are the highest and the TP rate of class 1 is the lowest; the FP rates of class 1 and class 3 are 0, while the FP rate of class 2 is 0.032. The average TP rate under 10-fold cross validation reaches 0.979, the average precision is 0.980, and the average F-Measure reaches 0.978. A good classification effect is thus obtained.

Figure 5 shows the threshold curves of the three classes (FP rate as abscissa, TP rate as ordinate): the ROC areas of class 1 and class 2 are both 0.981, while that of class 3 is 1. This proves that the classification effect of this model on class 3 is the best, while those on class 1 and class 2 are slightly lower.
(a) Threshold curve of class 1
(b) Threshold curve of class 2
(c) Threshold curve of class 3
In experiment 2, the GABD method is compared, on the same sample database, to the support vector machine (SVM) [12], BayesNet [13], multilayer perceptron (MLP) neural network [14], RBF neural network [15], and BP neural network [16] models, in order to further check the detection of the surface texture of the polyimide matrix inorganic nanocomposite thin films. The comparison of the classification performances of the detection models is shown in Table 4 and Figure 6. By analyzing the data in Table 4 and Figure 6, we know that the correctly classified instances obtained by GABD reach 97.85%, higher than those obtained by SVM, BayesNet, MLP, BP, and RBF. The F-Measure obtained by our method is 0.978, the same as those obtained by the three neural network models, while the F-Measures of SVM and BayesNet are 0.972 and 0.964, respectively, indicating that the classification effect of GABD is better than that of SVM and BayesNet. As for model building time, the GABD is built in 0.18 seconds, much faster than SVM (1.34 seconds), MLP (0.78 seconds), BP (5.06 seconds), and RBF (0.66 seconds), but slightly slower than BayesNet (0.05 seconds), because the GABD is built up from ten weak Decision Stump classifiers. In practical application, however, this slight difference in time can be safely ignored. Figure 7 shows the ROCs of the six classification models, with the FP rate as abscissa and the TP rate as ordinate. It can be seen from the figure that the ROC of GABD lies at the top left corner, indicating that GABD has the best comprehensive classification effect. Based on the foregoing, the GABD method has certain advantages in both accuracy and speed of recognition compared to the other models; it is therefore more suitable for the detection of inorganic nanocomposite thin films.

4. Conclusion
An automatic detection method (GABD) based on the GLCM and Adaboost M1 is put forward in this paper, which can be used to detect the elements of nanocomposite film. Without prior knowledge, the gray level cooccurrence matrix features of film surface images obtained by scanning electron microscope can be extracted. A strong classifier is then built from ten decision tree classifiers, which realizes the automatic detection of nanocomposite film elements.
After analyzing the experimental data, the following conclusions are obtained:
(1) The detection and identification of pure films and composite films doped with nanoparticles can be achieved effectively by the GABD method. The precision can reach 1, and the identification is better when one or two kinds of nanoparticles are doped.
(2) Using the GABD method, the precision over the three kinds of films reaches 0.980 and the recall reaches 0.979, so it is possible to recognize and classify the films.
(3) The comparison experiment shows that the classification performance of GABD is better than that of the SVM [12], BayesNet [13], multilayer perceptron (MLP) neural network [14], RBF neural network [15], and BP neural network [16] models on the same sample database. Moreover, it is superior to single classifiers with respect to the accuracy of nanocomposite film detection.
(4) The GABD method does not require users to have basic knowledge of nanomaterials; all the work of film element recognition can be done by the corresponding software and equipment. This method thus benefits the production and preparation of nanocomposite film, supporting the nanocomposite film industry with information technology.
Inorganic nanocomposite thin films with polyimide matrix have been studied for only a dozen years at home and abroad, and no studies on their automatic detection have been reported yet. This paper studies the automatic detection of such films by making use of textural features and pattern classification. The automatic detection of the doped nanoparticle components will be the focus of future study, in which other texture features and classification methods are planned to increase the precision of the automatic detection of nanocomposite film elements and components.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This paper has obtained the support of the National Natural Science Foundation of China (51077028, 51307046, and 60803096) and Heilongjiang Natural Science Foundation of China (A201006), National Basic Research Program of China (2012CB723308), Fund of the State Ethnic Affairs Commission of China, and the Fundamental Research Funds for the Central Universities. Authors also gratefully acknowledge the helpful comments and suggestions of the reviewers who improved the presentation.
References
[1] C. M. M. Soares, M. M. de Freitas, A. L. Araújo, and P. Pedersen, "Identification of material properties of composite plate specimens," Composite Structures, vol. 25, no. 1–4, pp. 277–285, 1993.
[2] J. Cunha, S. Cogan, and C. Berthod, "Application of genetic algorithms for the identification of elastic constants of composite materials from dynamic tests," International Journal for Numerical Methods in Engineering, vol. 45, no. 7, pp. 891–900, 1999.
[3] M. Leo, D. Looney, T. D'Orazio, and D. P. Mandic, "Identification of defective areas in composite materials by bivariate EMD analysis of ultrasound," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 1, pp. 221–232, 2012.
[4] S. E. Hamdi, A. le Duff, L. Simon, G. Plantier, A. Sourice, and M. Feuilloy, "Acoustic emission pattern recognition approach based on Hilbert-Huang transform for structural health monitoring in polymer-composite materials," Applied Acoustics, vol. 74, no. 5, pp. 746–757, 2013.
[5] R. R. D. L. H. González-Carrato, F. P. G. Márquez, V. Dimlaye, and D. Ruiz-Hernández, "Pattern recognition by wavelet transforms using macro fibre composites transducers," Mechanical Systems and Signal Processing, vol. 48, no. 1-2, pp. 339–350, 2014.
[6] A. Farhidzadeh, A. C. Mpalaskas, T. E. Matikas, H. Farhidzadeh, and D. G. Aggelis, "Fracture mode identification in cementitious materials using supervised pattern recognition of acoustic emission features," Construction and Building Materials, vol. 67, pp. 129–138, 2014.
[7] N. Dervilis, M. Choi, S. G. Taylor et al., "On damage diagnosis for a wind turbine blade using pattern recognition," Journal of Sound and Vibration, vol. 333, no. 6, pp. 1833–1850, 2014.
[8] Z. Yang, Q. Yu, W. Dong, X. Gu, W. Qiao, and X. Liang, "Structure control classification and optimization model of hollow carbon nanosphere core polymer particle based on improved differential evolution support vector machine," Applied Mathematical Modelling, vol. 37, no. 12-13, pp. 7442–7451, 2013.
[9] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
[10] A. R. Oskouei, H. Heidary, M. Ahmadi, and M. Farajpur, "Unsupervised acoustic emission data clustering for the analysis of damage mechanisms in glass/polyester composites," Materials & Design, vol. 37, pp. 416–422, 2012.
[11] R. E. Schapire, "The convergence rate of AdaBoost," in Proceedings of the 23rd Conference on Learning Theory, Haifa, Israel, 2010.
[12] H. Guo, J. Yin, J. Zhao, Z. Huang, and Y. Pan, "Prediction of fatigue life of packaging EMC material based on RBF-SVM," International Journal of Materials and Product Technology, vol. 49, no. 1, pp. 5–17, 2014.
[13] O. Schulte, H. Khosravi, A. E. Kirkpatrick, T. Gao, and Y. Zhu, "Modelling relational statistics with Bayes nets," Machine Learning, vol. 94, no. 1, pp. 105–125, 2014.
[14] N. Kucuk, S. R. Manohara, S. M. Hanagodimath, and L. Gerward, "Modeling of gamma ray energy-absorption buildup factors for thermoluminescent dosimetric materials using multilayer perceptron neural network: a comparative study," Radiation Physics and Chemistry, vol. 86, pp. 10–22, 2013.
[15] M. Contreras, S. Nagarajaiah, and S. Narasimhan, "Real time detection of stiffness change using a radial basis function augmented observer formulation," Smart Materials and Structures, vol. 20, no. 3, Article ID 035013, 2011.
[16] H.-Q. Wang, "Improvement of the recognition probability about camouflage target based on BP neural network," Spectroscopy and Spectral Analysis, vol. 30, no. 12, pp. 3316–3319, 2010 (Chinese).
Copyright
Copyright © 2015 Hai Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.