Research Article | Open Access

Jinhong Chen, Haoting Liu, Jingchen Zheng, Ming Lv, Beibei Yan, Xin Hu, Yun Gao, "Damage Degree Evaluation of Earthquake Area Using UAV Aerial Image", International Journal of Aerospace Engineering, vol. 2016, Article ID 2052603, 10 pages, 2016. https://doi.org/10.1155/2016/2052603

Damage Degree Evaluation of Earthquake Area Using UAV Aerial Image

Academic Editor: Nicolas Avdelidis
Received: 14 Feb 2016
Revised: 25 May 2016
Accepted: 05 Jun 2016
Published: 25 Jul 2016

Abstract

An Unmanned Aerial Vehicle (UAV) system and an aerial image analysis method are developed to evaluate the damage degree of an earthquake area. Both a single-rotor and a six-rotor UAV are used to capture visible light images of ground targets. Five types of typical ground target are considered in the damage degree evaluation: the building, the road, the mountain, the riverway, and the vegetation. The image analysis proceeds in stages. First, the Image Quality Evaluation Metrics (IQEMs), that is, the image contrast, the image blur, and the image noise, are used to assess the imaging definition. Second, once the image quality is qualified, the Gray Level Cooccurrence Matrix (GLCM) texture features, the Tamura texture features, and the Gabor wavelet texture features are computed. Third, a Support Vector Machine (SVM) classifier is employed to evaluate the damage degree. Finally, a new damage degree evaluation (DDE) index is defined to assess the damage intensity of the earthquake. Extensive experiment results verify the correctness of the proposed system and method.

1. Introduction

The damage degree evaluation (DDE) of a natural disaster provides intuitive supporting information for governments and disaster rescue departments [1]. Recently, Unmanned Aerial Vehicle (UAV) systems and intelligent image analysis methods have begun to be used to collect disaster images and make geographical interpretations of ground targets [2]. Thanks to its low cost and fast response speed, the advantages of UAV application are apparent; however, the disadvantage is also obvious: the intelligent data processing ability is still limited. The handicaps come from both modelling complexity and computation accuracy issues [3]. Figure 1 shows aerial image samples of an earthquake: (a) shows a dilapidated ground building and (b) shows a landslide after the earthquake. In Figure 1, the collapse can be recognized by a human easily, but how to describe the damage features and how to assess the damage degree with a computational model remain open problems.

Many efforts have been made to improve the usability of UAV based application systems. In [4], the authors developed a UAV based system to support the Global Change Observation Mission (GCOM); a multiangular spectral observation method and a simple BRF model were utilized to assist the information processing. In [5], a forest height estimation technique was proposed for remote sensing applications: a dual-baseline SAR tomographic technique was applied to single-pass L-band PolInSAR data. In [6], a UAV was employed for overhead power line inspection, and the corresponding image processing algorithms were developed to identify power lines under complex vegetation. The current difficulty of UAV application lies not in the design of the aviation platform but in the solution of the data processing problem [7]. Thus it is necessary to develop a system and a data processing method that solve the image capture and analysis issues of the earthquake application.

In this paper, a UAV aerial image capture system and its analysis technique are presented. A visible light camera is installed on both the single-rotor and the six-rotor UAV systems, and a software system supports interactive evaluation of the aerial images. Five typical ground objects are considered for damage degree analysis: the building, the road, the mountain, the riverway, and the vegetation. The computation proceeds as follows. First, the image quality [8] of the visible light camera output is evaluated; the Image Quality Evaluation Metrics (IQEMs) comprise the image contrast, the image blur, and the image noise. Second, if the captured image passes the quality check, a texture analysis is carried out; the image texture features include the Gray Level Cooccurrence Matrix (GLCM) features [9], the Tamura features [10], and the Gabor wavelet features [11]. Third, a Support Vector Machine (SVM) classifier [12] is used to evaluate the damage degree of the earthquake area. A new assessment index, named the DDE index, is defined to describe the damage intensity of the earthquake.

The main contributions of this paper are as follows. First, an integrated UAV application system for earthquake rescue is developed: two kinds of UAV are utilized to implement the information collection task, and an interactive intelligent image analysis software system is developed. Second, a new evaluation index, the DDE index, is proposed to assess the damage level of an earthquake.

In the following sections, the hardware design of the UAV system is presented first. Then the corresponding image analysis algorithms are introduced. Finally, experiment results and discussions are given.

2. Hardware System Design Method

The hardware designs of the UAV systems are shown in Figure 2: (a) is a photo of the single-rotor UAV system and (b) is a photo of the six-rotor UAV system. The single-rotor UAV is an oil-driven system; its maximum flying time exceeds 2.5 hours, and its flying height can approach thousands of meters. Because the payload of the single-rotor UAV is larger than 25 kg, it can carry large communication apparatus, so its aviation control distance reaches several kilometers. The single-rotor UAV also has better wind resistance. In contrast, the six-rotor UAV is a battery-driven system; its hovering time is only about 20 minutes, and its hovering height is only hundreds of meters. The payload of the six-rotor UAV is less than 10 kg, so its aviation control distance is only about one and a half kilometers. The single-rotor UAV can therefore perform long-duration, long-distance flight tasks, while the six-rotor UAV is only fit for short-distance information collection tasks.

A visible light camera is utilized to record the ground image. A CMOS sensor and a long-focus lens are used in this camera: the sensor size is about 1/1.8 inches, and the focal length can be controlled from 15.6 mm to 500.0 mm. The largest detection distance of the camera is about 5000 m. The visible light camera produces vivid images; however, its output degrades under complex environment light, so a near-infrared light filter [13] can be used to decrease this influence. The weight of the camera is less than 5.0 kg. A two-degrees-of-freedom aviation movement control platform [14] is used to tune the working pose of the camera. This platform regulates the yaw angle and the pitch angle; their tuning ranges are from 0° to 360° and from 0° to 120°, respectively. The micro-electro-mechanical-systems gyroscope [15] inside the movement control platform assesses its working state, so the final control precision can reach about ±0.1°.

Both the single-rotor UAV and the six-rotor UAV are utilized to accomplish the information collection task. Comparatively speaking, the six-rotor UAV can hover more stably than the single-rotor UAV because its multiple rotors improve the flight stability of the airframe. Its low flight height and good resistance to near-ground wind also make the six-rotor UAV easy for the user to control. However, due to the limitations of current battery technology, most commercial products of this type can hover for only about 20 minutes. In contrast, the single-rotor UAV in our application has a large airframe and is oil-driven, so it can stay aloft for a long duration. Considering the complexity and the diversity of calamity rescue tasks, both the six-rotor UAV and the single-rotor UAV should be supplied to the rescue team for different information collection tasks.

3. Proposed Damage Degree Evaluation Method

3.1. The Proposed Computational Flow Chart

The proposed computational flow chart is shown in Figure 3. First, a series of images of typical ground objects are collected by the UAV systems. Second, if these captured images pass the image quality check (three IQEMs are used to assess the image quality), they can be used to train an SVM classifier: the image regions of the five types of ground object, that is, the Regions of Interest (ROIs) in the aerial images, are selected by hand, and the image texture features are computed from them. These texture features are used to train the SVM classifier; the supervising data of the SVM are the damage degree evaluation results advised by the calamity rescue experts [16]. Third, when a disaster happens, the UAV system is employed to collect new images of the disaster area; the typical ground objects are selected and marked by hand, and the image texture features are computed if those images pass the image quality check. Finally, the trained SVM is used to implement the damage degree evaluation. The output of the SVM, or the sum of the SVM outputs, is the DDE index.
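The flow above can be sketched as a small driver routine. All function names and bodies here are illustrative placeholders standing in for the paper's modules, not its actual code:

```python
# Illustrative skeleton of the evaluation pipeline described above.
# Function names and bodies are placeholders, not the paper's code.

def passes_quality_check(image):
    """Stand-in for the three IQEMs (contrast, blur, noise)."""
    return True  # placeholder threshold test

def extract_texture_features(roi):
    """Stand-in for GLCM + Tamura + Gabor feature extraction."""
    return [0.0] * 10  # placeholder feature vector

def evaluate_damage(images, rois_per_image, svm_predict):
    """Quality gating, per-ROI feature extraction, and SVM scoring;
    the DDE index is the sum of the per-target SVM outputs."""
    dde = 0
    for image, rois in zip(images, rois_per_image):
        if not passes_quality_check(image):
            continue  # unqualified images are discarded
        for roi in rois:
            dde += svm_predict(extract_texture_features(roi))
    return dde

# Toy run: two images with one hand-marked ROI each and a constant
# "classifier" that assigns damage degree 3 to every target.
print(evaluate_damage([None, None], [[1], [2]], lambda f: 3))  # -> 6
```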

3.2. The Image Quality Evaluation Metrics

It is well known that the image analysis result will not be reliable if the image quality is poor; thus the IQEMs should be used to evaluate the image quality [17] before further analysis of the aerial image features. In this paper, the image contrast, the image blur, and the image noise are considered. The image contrast reflects the regional discrepancy between the image foreground and the image background; it is computed by (1). The image blur represents the edge definition of degenerated image details; the edge spread degree of the edge points is evaluated by this metric, as computed by (2). The image noise degree shows the noise contamination level of the image data; for simplicity, the variance of image regions sampled at random positions in the original image is used, as computed by (3):

$$C = \frac{1}{K} \sum_{k \in \Omega} \frac{g_{\max}^{(k)} - g_{\min}^{(k)}}{g_{\max}^{(k)} + g_{\min}^{(k)}}, \tag{1}$$

$$B = \frac{1}{|E|} \sum_{(x,y) \in E} w(x,y), \tag{2}$$

$$V = \frac{1}{K} \sum_{k \in \Omega} \frac{1}{N_k} \sum_{i=1}^{N_k} \bigl( g_i^{(k)} - \mu_k \bigr)^2, \tag{3}$$

where $g_{\max}^{(k)}$ and $g_{\min}^{(k)}$ are the maximum and the minimum gray values of the $k$th image block, $\Omega$ is the set of image blocks and $K$ is their number, $w(x,y)$ is the width of the edge-spread point at position $(x,y)$ and $E$ is the set of edge points, $\mu_k$ is the mean of the $k$th image block, $g_i^{(k)}$ is the $i$th image intensity of the $k$th image block, and $N_k$ is the pixel number of the $k$th image block.
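As a rough illustration, two of the three metrics can be sketched as follows, assuming a block-based contrast (maximum minus minimum gray value over their sum, averaged over blocks) and a block-variance noise measure; the paper's exact formulations may differ, and the edge-spread blur metric is omitted:

```python
# Sketch of two IQEMs on a gray-level image stored as a list of lists.
# Block-based forms are assumed from the symbol definitions in the text.

def blocks(img, size):
    """Yield non-overlapping size x size sub-blocks of the image."""
    h, w = len(img), len(img[0])
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield [row[c:c + size] for row in img[r:r + size]]

def contrast_metric(img, size=2):
    """Mean (gmax - gmin) / (gmax + gmin) over image blocks."""
    vals = []
    for b in blocks(img, size):
        flat = [g for row in b for g in row]
        gmax, gmin = max(flat), min(flat)
        vals.append((gmax - gmin) / (gmax + gmin) if gmax + gmin else 0.0)
    return sum(vals) / len(vals)

def noise_metric(img, size=2):
    """Mean per-block gray-level variance (the paper samples blocks at
    random positions; here every block is used for determinism)."""
    vals = []
    for b in blocks(img, size):
        flat = [g for row in b for g in row]
        mu = sum(flat) / len(flat)
        vals.append(sum((g - mu) ** 2 for g in flat) / len(flat))
    return sum(vals) / len(vals)

img = [[10, 200, 10, 200],
       [200, 10, 200, 10],
       [10, 200, 10, 200],
       [200, 10, 200, 10]]
print(round(contrast_metric(img), 4))  # -> 0.9048
print(noise_metric(img))               # -> 9025.0
```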

3.3. The Image Texture Analysis Methods

Different from frequency domain or structure based texture analysis methods, the GLCM is a statistics based technique for low level texture feature estimation [18]. Because an aerial image usually has a large view field but low detail, the GLCM texture features can be used to evaluate the texture chaos degree of the ground image; the spatial dependence among pixels is thereby described. In this paper, five GLCM texture features are computed: the angular second moment (ASM), the contrast (CON), the correlation (COR), the entropy (ENT), and the inverse difference moment (IDM), given by (4), (5), (6), (7), and (8), respectively. When calculating the GLCM texture features, the significance directions of 0°, 45°, 90°, and 135° are considered; the size of the observation window is 4 × 4; the computation step and the gray level of the GLCM are 1 and 4, respectively. Consider

$$\mathrm{ASM} = \sum_{i} \sum_{j} p(i,j)^2, \tag{4}$$

$$\mathrm{CON} = \sum_{i} \sum_{j} (i - j)^2\, p(i,j), \tag{5}$$

$$\mathrm{COR} = \frac{\sum_{i} \sum_{j} i\, j\, p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}, \tag{6}$$

$$\mathrm{ENT} = -\sum_{i} \sum_{j} p(i,j) \log p(i,j), \tag{7}$$

$$\mathrm{IDM} = \sum_{i} \sum_{j} \frac{p(i,j)}{1 + (i - j)^2}, \tag{8}$$

where $p(i,j)$ is the GLCM of the original image, and $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are the means and the standard deviations of the marginal distributions $p_x(i) = \sum_j p(i,j)$ and $p_y(j) = \sum_i p(i,j)$.
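A minimal GLCM computation with the stated parameters (4 gray levels, step 1, 0° direction) might look like the sketch below; the symmetric, normalized form of the matrix is an assumption on our part:

```python
# Minimal GLCM sketch: 4 gray levels, step 1, 0-degree direction.
# A symmetric, normalized co-occurrence matrix is assumed.
import math

def glcm(img, levels=4):
    """Normalized co-occurrence counts of horizontal neighbor pairs."""
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            P[a][b] += 1
            P[b][a] += 1   # symmetric counting
            total += 2
    return [[v / total for v in row] for row in P]

def glcm_features(P):
    """ASM, contrast, entropy, and inverse difference moment of a GLCM."""
    n = len(P)
    asm = sum(P[i][j] ** 2 for i in range(n) for j in range(n))
    con = sum((i - j) ** 2 * P[i][j] for i in range(n) for j in range(n))
    ent = -sum(P[i][j] * math.log(P[i][j])
               for i in range(n) for j in range(n) if P[i][j] > 0)
    idm = sum(P[i][j] / (1 + (i - j) ** 2)
              for i in range(n) for j in range(n))
    return asm, con, ent, idm

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
P = glcm(img)
asm, con, ent, idm = glcm_features(P)
print(round(con, 3))  # -> 0.333 (neighbors are mostly equal, low contrast)
```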

The Tamura features are a human visual system [19] based method for texture analysis. The coarseness and the contrast metrics of the Tamura features are calculated in this paper: the coarseness describes the granularity of an image block, while the contrast reflects the intensity structure of the image; they are computed by (9), (10), and (11). The Gabor wavelet features are also considered. The Gabor wavelet is an evolution of the classic Gaussian function; both the scale and the orientation information can be controlled by it. After the Gabor wavelet transform, the mean square energy and the mean amplitude are computed as texture descriptors by (12), (13), and (14). For example, with the five scales 1, 2, 3, 4, and 5 and the six orientations 0, π/3, 2π/3, π, 4π/3, and 5π/3, 30 Gabor filters are obtained. Finally, the Gabor wavelet feature vector is given by (15). Consider

$$A_k(x,y) = \frac{1}{2^{2k}} \sum_{(i,j) \in N_k(x,y)} g(i,j), \tag{9}$$

$$F_{\mathrm{crs}} = \frac{1}{mn} \sum_{x} \sum_{y} 2^{k^{*}(x,y)}, \tag{10}$$

$$F_{\mathrm{con}} = \frac{\sigma}{\alpha_4^{1/4}}, \qquad \alpha_4 = \frac{\mu_4}{\sigma^4}, \tag{11}$$

$$\bigl| G_{k,s}(x,y) \bigr| = \sqrt{\operatorname{Re}\bigl(G_{k,s}(x,y)\bigr)^2 + \operatorname{Im}\bigl(G_{k,s}(x,y)\bigr)^2}, \tag{12}$$

$$E(k,s) = \sum_{x} \sum_{y} \bigl| G_{k,s}(x,y) \bigr|^2, \tag{13}$$

$$\mu(k,s) = \frac{1}{mn} \sum_{x} \sum_{y} \bigl| G_{k,s}(x,y) \bigr|, \tag{14}$$

$$f = \bigl[ E(1,1), \mu(1,1), \ldots, E(K,S), \mu(K,S) \bigr], \tag{15}$$

where $m$ and $n$ are the size of the image block, $N_k(x,y)$ is the $2^k \times 2^k$ neighborhood of pixel $(x,y)$, $k^{*}(x,y)$ is the $k$ that maximizes the neighborhood-average differences in either direction, $\mu_4$ is the fourth moment about the mean, $\sigma^2$ is the variance, $G_{k,s}$ is the Gabor transform result of the image at scale $k$ and orientation $s$, $\operatorname{Re}$ and $\operatorname{Im}$ compute the real part and the imaginary part, $K$ and $S$ are the numbers of scales and orientations of the Gabor filters, and $E$ and $\mu$ are the mean square energy and the mean amplitude features.
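Two of these descriptors are small enough to sketch directly: the Tamura contrast (standard deviation divided by the fourth root of the kurtosis) and the Gabor mean amplitude. The full Gabor filter bank is omitted here; `mean_amplitude` simply averages precomputed response magnitudes, which is an assumption about how the descriptor is applied:

```python
# Tamura contrast: F_con = sigma / alpha4^(1/4), alpha4 = mu4 / sigma^4.
def tamura_contrast(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    if var == 0:
        return 0.0          # flat region: no contrast
    mu4 = sum((p - mean) ** 4 for p in pixels) / n
    alpha4 = mu4 / var ** 2  # kurtosis
    return var ** 0.5 / alpha4 ** 0.25

# Gabor mean amplitude over a set of filter responses (assumed to be
# the magnitudes of the transform; a real pipeline would compute them
# from the filter bank first).
def mean_amplitude(responses):
    return sum(abs(r) for r in responses) / len(responses)

print(tamura_contrast([0, 0, 255, 255]))  # -> 127.5
print(mean_amplitude([3, -4, 5]))         # -> 4.0
```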

3.4. The Damage Degree Evaluation Method

An SVM is utilized to evaluate the damage degree of the ground targets; LIBSVM [20] is used for the simulation computation. The radial basis kernel function [21] of the SVM is given by (16). When training the SVM, the full training vector of image texture features is the concatenation of all the GLCM, Tamura, and Gabor features; however, for different ground targets, the full training vector may not yield the best classification effect because of redundant or useless components. As a result, a new training data organization method is shown in Table 1. Table 2 shows a supervising data example of the building DDE description; the supervising results come from the subjective evaluation opinions of the calamity rescue experts. In this paper, a 5-degree classification is used for all the ground targets. Finally, the output of the SVM, or the sum of the SVM outputs, is taken as the DDE index, which assesses the damage intensity of the earthquake:

$$K(x_i, x_j) = \exp\left( -\frac{\| x_i - x_j \|^2}{2 \sigma^2} \right), \tag{16}$$

where $K(x_i, x_j)$ is the radial basis kernel function of the SVM classifier and $\sigma$ is a control parameter.


Number | Training data | Supervising data
1 | GLCM, GLCM, GLCM, , , | Damage degree of building
2 | GLCM, GLCM, GLCM, GLCM, , | Damage degree of road
3 | GLCM, GLCM, GLCM, , , | Damage degree of mountain
4 | GLCM, GLCM, GLCM, GLCM, , | Damage degree of riverway
5 | GLCM, GLCM, GLCM, , , | Damage degree of vegetation


Damage degree | Damage degree description
1 | Only a few of the load-bearing components of the building are lightly damaged
2 | A few of the load-bearing components of the building are seriously damaged
3 | Some of the load-bearing components of the building are seriously damaged
4 | Many of the load-bearing components of the building are seriously damaged
5 | All of the load-bearing components of the building are seriously damaged
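The radial basis kernel of (16) is small enough to sketch on its own. One common parameterization is shown below; whether the paper uses this sigma form or the equivalent gamma form is not specified, so treat the exact scaling as an assumption:

```python
# RBF kernel sketch: K(x, y) = exp(-||x - y||^2 / (2 * sigma^2)),
# with sigma as the control parameter mentioned in the text.
import math

def rbf_kernel(x, y, sigma=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # identical vectors -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]))  # distant vectors -> near 0
```

In LIBSVM this corresponds to the built-in RBF kernel; only the feature vectors of Table 1 and the expert labels of Table 2 need to be supplied as training data.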

4. Experiments and Discussions

To test the correctness of the proposed system and method, a series of simulation experiments are carried out, and a corresponding software system is developed. Many actual aerial images are used to test the performance of the proposed algorithms. All the data processing modules are implemented in Matlab and C on a PC (2.4 GHz CPU and 3 GB RAM); the texture analysis module is written in Matlab, and the remaining modules are written in C.

4.1. The Image Quality Evaluation of Aerial Images

The image quality evaluation plays an important role in the data analysis of outdoor applications: if the image quality is poor, the subsequent computation will be unreliable. Figure 4 shows examples of captured aerial images: (a) is affected by the motion blur of the UAV; (b) suffers from mist; (c) and (d) are quality-qualified images. Table 3 shows the image quality evaluation results of Figure 4. Obviously, images with poor quality cannot be used for the following computation. In this paper, more than 200 aerial images have been accumulated so far; according to the image quality computation, 32 images are labelled as unqualified. After accumulating a large amount of image data, it is found that the metric distributions of the qualified and the unqualified data are distinct: for each of the three metrics, the qualified images fall within a characteristic range determined empirically from the accumulated data. An image is regarded as unqualified if even one evaluation metric falls outside its qualified range.


Item | Image contrast degree | Image blur degree | Image noise degree
(a) | 0.0800 | 2.0628 | 42.31
(b) | 0.1170 | 1.2761 | 40.83
(c) | 0.4393 | 3.0324 | 30.02
(d) | 0.3928 | 2.9092 | 32.16
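The "every metric must fall in its qualified range" rule can be sketched as follows. The ranges below are illustrative placeholders chosen only to be consistent with Table 3, not the paper's measured distributions:

```python
# Hypothetical quality gate: an image is qualified only if every metric
# falls inside its qualified range.  These ranges are ASSUMED for
# illustration and are not the paper's empirical values.
QUALIFIED_RANGES = {
    "contrast": (0.30, 1.00),   # assumed
    "blur":     (2.50, 5.00),   # assumed (higher value = sharper here)
    "noise":    (0.00, 35.00),  # assumed
}

def is_qualified(metrics):
    return all(lo <= metrics[name] <= hi
               for name, (lo, hi) in QUALIFIED_RANGES.items())

# Metrics of images (a) and (c) from Table 3:
print(is_qualified({"contrast": 0.0800, "blur": 2.0628, "noise": 42.31}))  # False
print(is_qualified({"contrast": 0.4393, "blur": 3.0324, "noise": 30.02}))  # True
```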

4.2. The DDE Computation of Aerial Image

The final development target of the aerial image analysis is to realize geographical interpretations of the damaged ground targets. Figure 5 shows typical ground targets of an earthquake: (a) is an image of a damaged building, (b) a damaged road, (c) a damaged mountain with vegetation, and (d) a damaged riverway. Table 4 shows the computation results of the multiple texture features; the combination of texture features follows the definitions in Table 1, and for the GLCM the results of the 0° significance direction are given. As Figure 5 and Table 4 show, the multiple texture features can represent the texture chaos degree of a damaged ground target. For example, the texture features of a normal road differ from those of a damaged road: the surface of a damaged road has many cracks or rocks, which disturb the smooth texture of a normal road. In addition, the selection and the segmentation of these ground targets are all accomplished by hand, and different ground targets can be segmented from one aerial image; thus the computation reliability of the proposed method can be guaranteed.


Item | Ground target | Computation results of image texture features
(a) | Building | GLCM = 0.0243; GLCM = 0.1299; GLCM = 4.0655; = 47.1238; = 29.2877; = []
(b) | Road | GLCM = 0.0934; GLCM = 0.6190; GLCM = 2.8277; GLCM = 0.2958; = 38.8204; = []
(c) | Mountain & vegetation | GLCM = 0.0237; GLCM = 2.3316; GLCM = 0.1795; = 51.2806; = 23.2138; = []
(d) | Riverway | GLCM = 0.1997; GLCM = 0.5015; GLCM = 2.3803; GLCM = 0.8168; = 11.2880; = []

The SVM is used to implement the texture classification of the selected ground targets. Although many other classifiers could be used here, the SVM is employed because of its good classification performance and its low demand for training data; experiments show that about 200 training samples are the minimum requirement, which fulfills the current application requirement. Take the ground building as an example. Table 5 presents the Correct Classification Ratio (CCR) results for the damaged building under different training conditions. Three types of training data are tested: the single GLCM features, the combination of GLCM and Tamura features, and the combination of GLCM, Tamura, and Gabor features; the training data quantity in this experiment is 200. Obviously, the third type yields the best classification effect. Another experiment trains the SVM with different training data quantities. From Table 5, the larger the training data quantity is, the better the CCR becomes. Similar results are obtained for the other ground targets.


Number | Feature combination type | CCR
1 | GLCM feature | ≈64%
2 | GLCM feature and Tamura feature | ≈73%
3 | GLCM feature, Tamura feature, and Gabor feature | ≈88%

Number | Training data quantity | CCR
1 | 200 | ≈87%
2 | 250 | ≈91%
3 | 300 | ≈93%
4 | 350 | ≈93%
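The CCR itself is simply the fraction of test samples whose predicted damage degree matches the expert label; a one-line sketch with toy data:

```python
# Correct Classification Ratio: matches between predicted degrees and
# expert labels, divided by the number of test samples.
def ccr(predicted, labels):
    correct = sum(p == t for p, t in zip(predicted, labels))
    return correct / len(labels)

print(ccr([2, 3, 5, 1, 3, 4], [2, 4, 5, 1, 3, 4]))  # 5 of 6 correct
```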

A new DDE index is defined to describe the damage level of an earthquake. The DDE index uses multiple texture features to represent the characteristics of the ground target and employs the SVM to classify the damage degree. Figure 6 shows image samples of different ground targets, and Table 6 compares the computation results of the DDE index with the subjective evaluation results of the calamity rescue expert. In Figure 6, the aerial images of the damaged building are shown in (a), (b), and (c), while the images of the damaged road are shown in (d), (e), and (f); the damage degree of the building or the road becomes progressively more serious from (a) to (c) and from (d) to (f). Table 6 gives the final damage degree evaluation results of Figure 6; the larger the number, the higher the damage degree. From these results, the proposed DDE index produces assessments similar to the opinions of the calamity rescue expert. After using hundreds of test images to assess the DDE, the correct evaluation ratio of the proposed method is statistically larger than 93%. Thus, if the training data are selected and organized carefully, the DDE index can replace the expert and implement automatic damage evaluation in the future.


Item | DDE index | Expert opinion
(a) | 2 | 2
(b) | 3 | 4
(c) | 5 | 5
(d) | 1 | 1
(e) | 3 | 3
(f) | 4 | 4

4.3. Discussions

The IQEMs are utilized to analyze the imaging environment of the UAV system. It is well known that image quality degeneration comes from complex environment light, camera noise, poor sensor performance, optical path defects, and so forth. Although high-definition cameras have been widely adopted in recent years, these negative factors still cannot be eliminated completely. Considering both computation effect and efficiency, the image contrast, the image blur, and the image noise are calculated in this paper; they can basically reflect the imaging performance of the visible light camera. Both subjective and objective evaluation methods are used to find the distribution regions of qualified image quality. First, a subjective evaluation of the accumulated images is implemented, classifying the image data into groups according to the subjective opinion of the user. Then an objective evaluation computes the IQEMs for those groups. It is thereby found that the qualified and the unqualified image quality occupy different distribution regions.

An interactive image selection method is utilized in this paper. Currently, intelligent recognition and segmentation of different ground targets are still very difficult, and no sufficiently robust image processing algorithm is available; even when some algorithms work for particular images, their computational cost is intolerable [22]. In addition, experience based knowledge, such as earthquake rescue expertise or image feature descriptions, cannot easily be embedded in a computation system. As a result, it is necessary to hand the image understanding and segmentation tasks to the system users themselves. Figure 7 shows the selection and the segmentation results of a ground building: (a) is the original aerial image of the disaster area and (b) is the segmentation result of a ground building. The image edges in (b) are all marked by hand: the user selects the ROI with a mouse or a touch panel and defines the target type with the keyboard. In this way, the segmentation and the marking of ground targets are reliable.

In this paper, the multiple texture features, including the GLCM features, the Tamura features, and the Gabor wavelet features, are selected to represent the texture information of the ground targets. These features are used because of their good performance in texture description. Many research works [23] have shown that the computation results of these features achieve analysis effects similar to the human visual system. For example, the Gabor wavelet features have a distribution character similar to the human ocular nerve system, and the Tamura features realize texture analysis through several perceptual metrics, such as coarseness, contrast, directionality, line-likeness, and roughness. For the degree evaluation of earthquake damage, the evaluation ground truth comes from the subjective experience of the human visual and cognitive system; thus these texture features achieve a good computational effect in the proposed system.

The SVM is utilized to classify the image texture features and assess the DDE of the aerial image because of its good classification performance and its low demand for training data. Obviously, other classifiers, such as a neural network or an Adaboost classifier [24], could also solve the classification problem in this paper. The SVM is employed for two reasons. First, the SVM is a mature pattern recognition tool for engineering applications; many available codes and software tools can be used to develop the proposed system. Second, the classification precision of the SVM is high enough for the discussed application: for example, the classification precision can approach 90% with only about 200 training samples, which is sufficient for the DDE evaluation according to the system users. Thus it is not necessary to consider other classifiers for the proposed application, although they may be adopted in the future if needed. With the accumulation of actual aerial earthquake images, the classification performance will certainly improve.

The DDE index is a useful parameter for assessing the earthquake damage degree, or the rescue significance degree. For example, in Figure 8, two sites A and B are destroyed by an earthquake. Both experience the same earthquake magnitude; however, the first site is located in a granite geological structure, while the second site is located in a clay geological structure, so the surface buildings of the second site are damaged more seriously. In that situation, if the rescue teams use the UAV based system to analyze the damage level rather than only the magnitude scale, they can make the correct decision to go to the second site to perform the rescue task. The decision can be made from the sum of the multiple DDE indexes. For example, the damage degrees of the building, road, mountain, riverway, and vegetation of the first site are 3, 3, 1, 2, and 3, while those of the second site are 4, 4, 3, 5, and 3; thus the rescue team should go to the second site because of its higher DDE sum (19 versus 12).
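The site comparison above reduces to summing the per-target degrees and taking the larger total:

```python
# Site prioritization by DDE sum, using the example degrees given above.
site_a = {"building": 3, "road": 3, "mountain": 1, "riverway": 2, "vegetation": 3}
site_b = {"building": 4, "road": 4, "mountain": 3, "riverway": 5, "vegetation": 3}

def dde_sum(site):
    """Overall DDE index of a site: sum of the per-target damage degrees."""
    return sum(site.values())

priority = max([("A", site_a), ("B", site_b)], key=lambda s: dde_sum(s[1]))[0]
print(dde_sum(site_a), dde_sum(site_b), priority)  # -> 12 19 B
```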

Calamity rescue is a complex problem that involves both technical and social factors [25]. In many cases, the response time for decision-making has to be short, leaving the rescue team little time for a detailed analysis of the rescue plan. Generally speaking, media attention and public consensus influence the rescue priorities and procedures, and the government departments and the rescue teams have to consider these factors in many situations. The development target of the proposed software is to provide an objective evaluation method for calamity rescue. If two damaged areas have similar social conditions, such as population, political influence, or degree of attention, this software can provide an objective recommendation for the rescue priority, and it can also explain the rescue ranking purely from the geographical situation assessment point of view. Thus unpredictable legal disputes and some potential criticisms can be avoided to some extent.

5. Conclusion

A damage degree evaluation system and method for earthquakes are proposed. Both the single-rotor UAV and the six-rotor UAV are used to implement the aviation photograph collection task, and a visible light camera is used to record the ground targets. Five typical ground targets, that is, the building, the road, the mountain, the riverway, and the vegetation, are selected for the damage degree analysis. The Image Quality Evaluation Metrics, including the image contrast, the image blur, and the image noise, are used to assess the imaging quality. Several image texture features, including the GLCM features, the Tamura features, and the Gabor wavelet features, are used to describe the ground target characteristics. The SVM classifier is used to evaluate the earthquake damage degree, and a new index, the DDE index, is defined to assess the damage level of the earthquake area. In the future, the computation module will be implanted into the hardware system, so that the system can implement fast disaster rescue and information analysis tasks.

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61501016. The authors thank the National Remote Sensing Center of China for providing some of the earthquake images.

References

1. S.-W. Chen, Y.-Z. Li, S.-Q. Xing, X.-S. Wang, and M. Sato, "Urban damage evaluation using polarimetric SAR data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '14), pp. 2754–2757, IEEE, Quebec, Canada, July 2014.
2. H. Liu, W. Wang, J. Zhen et al., "The design of air-space integrative calamity information analysis and rescue system," in Proceedings of the Annual IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER '15), pp. 1997–2001, Shenyang, China, June 2015.
3. H. Sebai and A. Kourgli, "An adaptive CBIR system for remote sensed data," in Proceedings of the 12th International Workshop on Content-Based Multimedia Indexing (CBMI '14), pp. 1–6, Klagenfurt, Austria, June 2014.
4. Y. Honda and K. Kajiwara, "Overview of GCOM-C1/SGLI and validation," in Proceedings of the 33rd IEEE International Geoscience and Remote Sensing Symposium (IGARSS '13), pp. 835–838, IEEE, Melbourne, Australia, July 2013.
5. Y. Huang, Q. Zhang, M. Schwaebisch, M. Wei, and B. Mercer, "Forest height estimation using single-pass polarimetric SAR tomography at L-Band," in Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS '14), pp. 3358–3361, Quebec City, Canada, July 2014.
6. J. I. Larrauri, G. Sorrosal, and M. Gonzalez, "Automatic system for overhead power line inspection using an unmanned aerial vehicle—RELIFO project," in Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS '13), pp. 244–252, Atlanta, Ga, USA, May 2013.
7. T. Moranduzzo, F. Melgani, M. L. Mekhalfi, Y. Bazi, and N. Alajlan, "Multiclass coarse analysis for UAV imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 12, pp. 6394–6406, 2015.
8. H. Liu, W. Wang, J. Zhen et al., "Blind image quality evaluation metrics design for UAV application," in Proceedings of the Annual IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER '15), pp. 293–297, Shenyang, China, June 2015.
9. M. A. Ferrer, J. F. Vargas, A. Morales, and A. Ordóñez, "Robustness of offline signature verification based on gray level features," IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 966–977, 2012.
10. L. Zong, L. Ying, and L. Daxiang, "A new texture feature extraction method for image retrieval," in Proceedings of the 4th International Conference on Intelligent Control and Information Processing (ICICIP '13), pp. 482–486, Beijing, China, June 2013.
11. P. Liu, H. Liu, J. Jin, and J. Li, "Water wave visualization simulation using feedback of image texture analysis," Multimedia Tools and Applications, vol. 74, no. 19, pp. 8379–8400, 2015.
12. J. A. Banados and K. J. Espinosa, "Optimizing support vector machine in classifying sentiments on product brands from Twitter," in Proceedings of the 5th International Conference on Information, Intelligence, Systems and Applications (IISA '14), pp. 75–80, Chania, Greece, July 2014.
13. X. Jia, J. Cui, D. Xue, and F. Pan, "Near infrared vein image acquisition system based on image quality assessment," in Proceedings of the International Conference on Electronics, Communications and Control (ICECC '11), pp. 922–925, Ningbo, China, September 2011.
14. L. Zhang, L. Yan, L. Meng, X. Li, and S. Huang, "The application study of helicopter airborne photoelectric stabilized pod in the high voltage power line inspection," in Proceedings of the International Conference on Optoelectronics and Microelectronics (ICOM '12), pp. 232–235, IEEE, Changchun, China, August 2012.
15. M. Saranya and S. Kalaiselvi, "A digital prototype of adaptive control MEMS gyroscope," in Proceedings of the IEEE International Multi Conference on Automation, Computing, Control, Communication and Compressed Sensing (iMac4s '13), pp. 807–811, Kottayam, India, March 2013.
  16. J. Yang, J. Chen, H. Liu, and J. Zheng, “Comparison of two large earthquakes in China: the 2008 Sichuan Wenchuan Earthquake and the 2013 Sichuan Lushan Earthquake,” Natural Hazards, vol. 73, no. 2, pp. 1127–1136, 2014. View at: Publisher Site | Google Scholar
  17. H. Liu, J. Li, and H. Lu, “Interactive imaging definition evaluation without reference,” in Proceedings of the International Conference on Computer and Electrical Engineering (ICCEE '10), pp. v2-283–v2-286, Chengdu, China, December 2010. View at: Google Scholar
  18. F. R. Al-Osaimi, M. Bennamoun, and A. Mian, “Spatially optimized data-level fusion of texture and shape for face recognition,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 859–872, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  19. H. Zhao, Z. Xu, and P. Hong, “Performance evaluation for three classes of textural coarseness,” in Proceedings of the 2nd International Congress on Image and Signal Processing (CISP '09), pp. 1–4, Tianjin, China, October 2009. View at: Publisher Site | Google Scholar
  20. C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
  21. K. S. Ettabaa, M. A. Hamdi, and R. B. Salem, “SVM for hyperspectral images classification based on 3D spectral signature,” in Proceedings of the 1st International Conference on Advanced Technologies for Signal and Image Processing (ATSIP '14), pp. 42–47, Sousse, Tunisia, March 2014.
  22. Y. Liu and Y. Yu, “Interactive image segmentation based on level sets of probabilities,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 2, pp. 202–213, 2012.
  23. A. Busch, W. W. Boles, and S. Sridharan, “Texture for script identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1720–1732, 2005.
  24. H. Huo, Y. Ji, S. Wang, X. Kuang, and C. Yang, “The research on AdaBoost-BPNN model of point absorber wave energy converter,” in Proceedings of the 11th IEEE International Conference on Mechatronics and Automation (ICMA '14), pp. 1762–1766, Tianjin, China, August 2014.
  25. J. Yang, J. Chen, H. Liu, K. Zhang, W. Ren, and J. Zheng, “The Chinese national emergency medical rescue team response to the Sichuan Lushan earthquake,” Natural Hazards, vol. 69, no. 3, pp. 2263–2268, 2013.

Copyright © 2016 Jinhong Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
