Abstract

Fiber Bragg Grating (FBG) sensors have been increasingly used in the field of Structural Health Monitoring (SHM) in recent years. In this paper, we propose an impact localization algorithm based on Empirical Mode Decomposition (EMD) and a Particle Swarm Optimization-Support Vector Machine (PSO-SVM) to achieve better localization accuracy for an FBG-embedded plate. In our method, EMD is used to extract the features of the FBG signals, and PSO-SVM is then applied to automatically train a classification model for impact localization. An impact monitoring system for FBG-embedded composites has been established to validate our algorithm experimentally. Moreover, the relationship between the localization accuracy and the distance from the impact to the nearest sensor has also been studied. The results suggest that, under our experimental conditions, the localization accuracy increases as this distance decreases and remains satisfactory, ranging from 93.89% to 97.14%. This article reports an effective and easy-to-implement method for FBG signal processing in SHM systems for composites.

1. Introduction

Structural Health Monitoring (SHM) is a class of nondestructive testing (NDT) techniques that allows real-time assessment of structural integrity and durability and hence provides early warning for structural safety [1, 2]. SHM has recently gained growing interest because of its high reliability and low cost, which meet the increasing requirements of many applications, especially in the fields of aerospace and civil engineering [1–4]. In SHM applications, the data acquisition system plays an important role. Compared with conventional sensors, FBG sensors have many advantages, such as small size, corrosion resistance, immunity to electromagnetic interference, and good compatibility with many composite materials [5–8]. Hence, in our work, FBG sensors have been selected and buried in the composite plate to collect signals and monitor structural performance. However, FBG sensors, especially commercial ones, still have disadvantages such as their relatively low sampling rate due to the fiber grating demodulation technology, which may lead to accuracy errors and signal aliasing [9, 10]. Thus, we should take this defect into consideration when building FBG-based SHM systems.

Impact localization, which constitutes an important part of SHM research, can help identify critical parts or regions of structures. With this technique, we can determine the necessity of further examination of a specific area before offline inspection, thus reducing the inspection cost while maintaining reliability and avoiding sudden failure [11, 12]. Many algorithms have been proposed in this field, especially for the case when the impact event cannot be observed directly. Generally speaking, two kinds of algorithms are widely accepted; they are described as follows.

The first is the passive monitoring method. In this kind of method, the sensors passively receive the signal and only the features of this signal are used to locate the impact directly. In the passive method, three features of the signal are most widely used: TDOA (Time Difference Of Arrival), AOA (Angle Of Arrival), and RSSI (Received Signal Strength Indication). The TDOA-based method is easy to implement and does not need a specific sensor layout or extra computing power [11, 13–15]. However, it requires a high signal-to-noise ratio (SNR) and a high sampling rate to obtain an accurate time difference. Moreover, the propagation velocity profile of the acoustic wave needs to be exactly determined a priori, which is often difficult in composites. In the AOA-based method, the angles between each sensor cluster and the acoustic source are computed for localization; thus the propagation velocity profile of the wave is not necessary, but a special sensor layout and more sensors are required to compute the angles [11, 16, 17]. The RSSI-based method is effective and does not need an accurate time difference or a special sensor layout, but it necessitates a complex and nonlinear energy distribution and dissipation model, which is usually hard to obtain for composite materials [11, 18].

The second is the active monitoring method. Here, the sensor and the actuator are treated as a whole, which can not only receive signals but also actively output signals. The most common active method is the time reversal method. Its main idea is to reverse the received signal and then output it at the sensor node, so that the signal focuses on the impact position [19–21]. This method needs only a few priors and few sensors, but active sensors and an accurate TDOA are indispensable.

Among all the methods mentioned above, none can be directly applied in our impact monitoring system for the FBG-embedded composite plate. First, because of the relatively low sampling rate of FBG sensors, we cannot measure the TDOA precisely, so the TDOA-based method may cause significant errors. Second, since the FBG sensors are buried or woven into the plate, it is hard to place them flexibly enough to meet the special layout requirement of the AOA-based method. Third, for the RSSI-based method, the anisotropy of the composites results in a more complicated model, which is difficult to implement and generalize. Finally, the FBG sensor is a passive sensor and cannot output signals; thus the time reversal method cannot be implemented. Therefore, in this paper we propose a passive impact localization method for the FBG-embedded composite plate to overcome the challenges of low sampling rate and fixed sensor layout.

The proposed method is mainly based on artificial intelligence techniques: training data are obtained first and then used to train pattern recognition models. This is a common and relatively new way to solve complex localization problems [22, 23]. In this paper, we divide the surface of the plate into several areas with the aim of determining which area the impact source belongs to. The principles of this technique are as follows. Firstly, the EMD method is used to obtain the features of the signal. Because of the low sampling rate of the FBG signal, we prefer energy features to time features. With the EMD method, signals can be decomposed into several components. Thus we can reduce noise disturbance and obtain more energy information: not only the energy of each signal but also the energy distribution. Secondly, the PSO-SVM method is applied to train a classification model that determines the impact area from the features obtained before. The SVM model is chosen because it is fast and performs well with only a small number of samples, which makes the whole method easy to implement on an actual system. With the PSO algorithm, the parameters of the SVM model can be selected automatically and precisely. Finally, the model is implemented and validated on the actual SHM system.

2. Theory and Methods

2.1. Theory of Empirical Mode Decomposition (EMD)

EMD is a signal analysis method that can process nonlinear and nonstationary signals directly. Based on the time-scale characteristics of the signal itself, EMD adaptively constructs specific basis functions for this signal, instead of using the preset and fixed basis functions that STFT (Short Time Fourier Transform) and WT (Wavelet Transform) do [24, 25]. Thus the EMD method can be applied to the decomposition of almost all types of signals, especially nonlinear and nonstationary ones. EMD decomposes the signal into several vibration components as shown in (1):

$$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t), \tag{1}$$

where $x(t)$ is the original signal, $r_n(t)$ is the residue, and $c_i(t)$ is the $i$-th adaptive component, known as an IMF (Intrinsic Mode Function). According to EMD theory, the IMFs are independent of each other and the information carried by the signal is fully distributed over the IMFs without any loss after the decomposition, which makes the information easier and more intuitive to observe or extract.

An IMF is defined to meet the following two requirements:
(1) The number of extrema and the number of zero-crossings are equal or differ by at most one.
(2) The upper and lower envelopes are symmetric (their mean value is zero at any point).

Based on the IMF definition, EMD computation can be achieved by the algorithm shown in Figure 1.
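
To make the sifting procedure summarized in Figure 1 concrete, the following minimal Python sketch decomposes a signal into IMFs by repeatedly subtracting the mean of cubic-spline envelopes. The stopping tolerance, iteration limits, and function names are illustrative assumptions, not the exact settings used in this work.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None  # too few extrema to build cubic envelopes -> stop sifting
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0

def emd(x, max_imfs=6, max_sift=50, tol=0.2):
    """Minimal EMD sketch: returns a list of IMFs and the final residue."""
    t = np.arange(len(x), dtype=float)
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(max_sift):
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs, residue  # residue is (nearly) monotonic
            # standard-deviation stopping criterion for the sifting loop
            sd = np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12)
            h = h_new
            if sd < tol:
                break
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```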

After EMD processing, the signal is decomposed into the sum of several IMFs and a residue. The first IMF represents the highest-frequency portion, and the residue is the lowest-frequency part, also known as the trend. From these components, we can obtain the energy distribution of the signal and use several carefully chosen IMFs to extract the signal features, thus reducing the noise influence, capturing the energy distribution, and making our localization model more robust and accurate.

2.2. Theory of Support Vector Machine and Particle Swarm Optimization

The SVM algorithm is a widely used identification method in the field of pattern recognition and can fit nonlinear models quickly with a small number of samples thanks to kernel-induced feature spaces [26]. For simplicity, let us first consider a binary classification problem and write the training set as in (2):

$$T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}, \quad x_i \in \mathbb{R}^{n},\; y_i \in \{-1, +1\}, \tag{2}$$

where $x_i$ is the $n$-dimensional feature vector and $y_i$ is the corresponding label. With the training set $T$, SVM tries to find a hyperplane which linearly separates the data and maximizes the geometric margin. In general, an experimental dataset is not linearly separable, so two improvements are made to the SVM method. First, a slack variable $\xi_i$ is introduced for each sample; this variable makes misclassification permissible at a certain cost. Second, the feature vector is mapped to a high-dimensional space with a nonlinear function $\phi(\cdot)$, making the classification approximately linearly separable. Accordingly, the SVM optimization is expressed as (3):

$$\min_{w,\,b,\,\xi}\; \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{N}\xi_i \quad \text{s.t.}\quad y_i\bigl(w^{T}\phi(x_i) + b\bigr) \ge 1 - \xi_i,\; \xi_i \ge 0, \tag{3}$$

where $(w, b)$ represents the hyperplane and $C$ is the penalty term that determines the tradeoff between the geometric margin and misclassification. Usually, the dimension of the mapping range is very high (even infinite); thus the explicit representation and computation of $\phi$ are difficult. Therefore, we introduce the kernel function, defined in (4):

$$K(x_i, x_j) = \phi(x_i)^{T}\phi(x_j). \tag{4}$$

With the kernel function, we can greatly reduce the computational complexity and improve the algorithm speed. The most widely used kernel function in SVM is the RBF (Radial Basis Function), shown in (5):

$$K(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^{2}}{2\sigma^{2}}\right), \tag{5}$$

where $\sigma$ is known as the width parameter that controls the radial range of the RBF. Applying Lagrange duality and the kernel function to the aforementioned optimization problem yields the equivalent dual problem of (3), shown in (6):

$$\max_{\alpha}\; \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j K(x_i, x_j) \quad \text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0,\; 0 \le \alpha_i \le C, \tag{6}$$

where $\alpha_i$ is the Lagrange multiplier, which can be solved by the sequential minimal optimization (SMO) algorithm. Then the classification decision function of SVM is shown in (7):

$$f(x) = \operatorname{sign}\!\left(\sum_{i \in \mathrm{SV}} \alpha_i y_i K(x_i, x) + b\right). \tag{7}$$

The kernel function $K$ is shown in (5), and $\mathrm{SV}$ is the subset of training samples corresponding to the nonzero Lagrange multipliers, that is, the support vectors.

According to (6), two parameters need to be preset when the SVM method is applied: the penalty term $C$ and the width parameter $\sigma$. Improper parameters may severely degrade the accuracy of the SVM method even when the procedure is otherwise correct. An exhaustive grid search with a preset parameter range and search interval is commonly used for parameter selection; however, it requires rich programming experience as well as repeated tests. The PSO method is a better alternative because it can automatically provide a good choice of parameters.
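
For reference, the following sketch trains a soft-margin RBF SVM corresponding to (5)–(7) with scikit-learn. The data, $C$, and $\sigma$ values are placeholders rather than the settings used in this study; note that scikit-learn parameterizes the RBF by gamma, which relates to the width parameter as gamma = 1/(2σ²).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training data: feature vectors X and impact-region labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = rng.integers(0, 6, size=120)

C, sigma = 10.0, 1.0                  # penalty term and RBF width (illustrative values)
gamma = 1.0 / (2.0 * sigma ** 2)      # scikit-learn's gamma = 1 / (2 * sigma^2)

# Standardize features, then fit a multiclass SVM (one-vs-one internally).
model = make_pipeline(StandardScaler(), SVC(C=C, kernel="rbf", gamma=gamma))
model.fit(X, y)
print(model.predict(X[:5]))
```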

PSO is a stochastic global optimization technique inspired by the social behavior of bird flocking [27]. This method models candidate solutions as particles in the search space and lets a number of particles explore this space, searching for the optimum by updating each particle through its velocity. At each iteration, the velocity of every particle is adjusted according to two extremes: the best position that particle itself has previously found and the best position the whole population has found. The standard update formula is shown in (8):

$$v_i^{k+1} = \omega v_i^{k} + c_1 r_1 \bigl(p_i - x_i^{k}\bigr) + c_2 r_2 \bigl(p_g - x_i^{k}\bigr), \qquad x_i^{k+1} = x_i^{k} + v_i^{k+1}, \tag{8}$$

where $x_i$ is the position of the $i$-th particle, $v_i$ is its velocity, $p_i$ and $p_g$ represent the best position of the single particle and of the whole population respectively, $r_1$ and $r_2$ are random numbers in $[0, 1]$, and $\omega$, $c_1$, and $c_2$ are the search parameters that need to be determined empirically. Compared with other population-based optimization tools, the advantages of PSO include fewer parameters and no evolution operators such as crossover and mutation, which makes it easy to implement [28, 29].

By combining SVM and PSO, we can train an SVM classification model automatically while simultaneously selecting the best values of the SVM parameters.
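
The sketch below combines the two steps: a small PSO loop searches over $(C, \sigma)$ using the cross-validated SVM accuracy as the particle fitness, following the update rule in (8). The swarm size, inertia weight, acceleration constants, and search bounds are assumed values for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    """3-fold CV accuracy of an RBF SVM for a candidate (C, sigma)."""
    C, sigma = params
    clf = SVC(C=C, kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
    return cross_val_score(clf, X, y, cv=3).mean()

def pso_svm(X, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5,
            bounds=((0.01, 100.0), (0.001, 10.0))):
    """PSO search for (C, sigma); returns the best pair and its CV accuracy."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        # velocity and position update, cf. (8)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest, pbest_fit.max()
```

The best pair found by the swarm would then be used to retrain the final classifier on the full training set, as described in Section 2.3.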

2.3. Overall Process of Signal Processing Method

The proposed algorithm, shown in Figure 2, is mainly based on the combination of EMD and PSO-SVM. It is used to process the data obtained by the FBG sensors and extract the impact information. The whole algorithm has two main parts: an online part and an offline part. The offline part obtains the classification model from precollected data, using EMD to extract the features and PSO-SVM to train the model. The online part applies the model to the actual impact monitoring. The main procedure of our method is as follows:
(1) Detect the existence of an impact event (online).
(2) Extract the impact features and form the feature vector with the EMD method (offline/online).
(3) Automatically select the appropriate parameters with the PSO method (offline).
(4) Train the final SVM model with the proper parameters (offline).

Before feature extraction, we need to set the mean of each signal to zero, since the center wavelengths of different FBG sensors are not the same. Then we use (9) to judge whether an impact event exists:

$$E_{p,k}(t) = \sum_{m=1}^{M} \bigl[s_{m}^{p,k}(t)\bigr]^{2}, \tag{9}$$

where $s_{m}^{p,k}(t)$ is the preprocessed signal from the $m$-th sensor, $p$ is the impact position, $M$ is the number of sensors, and $K$ is the number of impact tests ($k = 1, \ldots, K$). The physical meaning of (9) is how the sum of the signal energy obtained by all sensors changes with time for a specific position and test. We assume that an impact event has occurred when the amplitude of the signal power sum exceeds a preset threshold. We then store the 50 sampling points before and after the maximum as one impact event sample.
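
A hedged sketch of this detection step: sum the squared zero-mean signals over all sensors as in (9), threshold the result, and cut a window of 50 samples on each side of the maximum. The threshold value and array shapes are assumptions.

```python
import numpy as np

def detect_impact(signals, threshold, half_window=50):
    """signals: array of shape (n_sensors, n_samples) for one test.
    Returns the windowed impact sample, or None if no impact is detected."""
    # Remove each sensor's center wavelength (zero-mean preprocessing).
    centered = signals - signals.mean(axis=1, keepdims=True)
    # Sum of signal power over all sensors at each time instant, cf. (9).
    power_sum = np.sum(centered ** 2, axis=0)
    if power_sum.max() <= threshold:
        return None
    peak = int(np.argmax(power_sum))
    start = max(peak - half_window, 0)
    stop = min(peak + half_window, centered.shape[1])
    return centered[:, start:stop]
```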

Then EMD is used to extract the characteristics of the impact event samples. The impact event at a specified position obtained by a specified FBG sensor is decomposed into IMFs with EMD. After decomposition, we empirically select part of the IMFs to compose the feature vector, thus reducing the noise effect, as shown in (10):

$$F = \bigl[\,E_i,\; P_i\,\bigr]_{i \in S}, \tag{10}$$

where $S$ is the index set of the selected IMFs, $E_i$ is the energy of the $i$-th IMF, and $P_i$ is the peak value of the $i$-th IMF. In this way we finally obtain the feature vector of each impact event sample. Note that before feeding the feature vectors to train the SVM model, standardization is necessary to eliminate the influence of unit or scale, improving the speed and accuracy of the algorithm.
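
A sketch of this feature construction, cf. (10): for each sensor's windowed signal, decompose with EMD, keep the empirically selected IMFs, collect their energy and peak values, and standardize the resulting vectors. The helper names are hypothetical, and `emd_func` is assumed to return `(imfs, residue)` as in the earlier EMD sketch.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def imf_features(imfs, selected=(0, 1, 2)):
    """Energy and peak value of the selected IMFs (1st-3rd, 0-indexed), cf. (10)."""
    feats = []
    for i in selected:
        if i >= len(imfs):
            continue  # fewer IMFs than expected; skip missing ones
        feats.append(np.sum(imfs[i] ** 2))      # energy of the i-th IMF
        feats.append(np.max(np.abs(imfs[i])))   # peak value of the i-th IMF
    return np.array(feats)

def build_dataset(samples, labels, emd_func):
    """samples: list of (n_sensors, window) arrays; emd_func: signal -> (imfs, residue)."""
    X = []
    for window in samples:
        per_sensor = []
        for sig in window:
            imfs, _ = emd_func(sig)
            per_sensor.append(imf_features(imfs))
        X.append(np.concatenate(per_sensor))
    # Standardize to remove the influence of unit or scale before SVM training.
    X = StandardScaler().fit_transform(np.array(X))
    return X, np.array(labels)
```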

We use the RBF kernel function to train the SVM classifier, since it is the most widely used and can map the original features to an infinite-dimensional space. The parameters are then carefully chosen by the PSO method, since they influence the prediction accuracy of the model. The input dataset is divided into a training set and a test set to obtain the accuracy (i.e., the fitness) of each particle, until the PSO algorithm converges or the requirement is met.

We then retrain the SVM classifier with the proper parameters and obtain the final model. This model is applied in the practical system to validate our algorithm.

3. Impact Monitoring Platform of Composites by FBG

An experimental platform of the impact monitoring system, shown in Figure 3, was built to localize impact events and verify our algorithm in this study.

As shown, impact events are simulated by a hammer whose impact energy can be set to ensure the consistency of each test. The impact wave is captured by the buried FBG sensors and then transmitted to the optical demodulator through the fiber. The sampling rate of the whole system is 2 kHz, which is determined by the optical demodulator. After demodulation, the optical signal is converted into an electrical signal and received by the computer. Our impact position classifier runs within the impact localization software on the PC, where it is verified.

It can be seen from Figure 3 that a total of 9 FBG sensors are buried in the test piece, divided into three channels connected to the optical demodulator, meaning that each channel contains 3 FBG sensors. The center wavelengths of the 3 FBG sensors in each channel are 1536.7 nm, 1541.1 nm, and 1546.4 nm, respectively. Note that the SM130 can perform synchronous measurement; thus we treat the collected data as time-aligned and ignore the synchronization error. In this paper, the optical demodulator is an SM130 from MOI, and its key parameters are as follows:
(i) Number of optical channels: 4
(ii) Scanning frequency: 2 kHz
(iii) Wavelength range: 1525–1565 nm
(iv) Maximum number of sensors per channel: 15
(v) Data transmission interface: Ethernet.

The specimen is shown in Figure 4: Figure 4(a) is the schematic of the specimen and Figure 4(b) is the physical specimen (including the sensor layout). The specimen is a composite sheet with a size of 630 × 550 mm, supported by aluminum alloy all around. As Figure 4 shows, the surface of the specimen is divided into 90 small square regions of 50 × 50 mm each, and our purpose is to identify the region receiving the impact. There are 9 FBG sensors buried in the composite plate, all sampled at 2 kHz. We arranged the FBG sensors on the test specimen to monitor the damage signal, and the specific arrangement is shown in Figure 4. Note that the 9 sensors are evenly distributed with a horizontal interval of 200 mm and a vertical interval of 150 mm, but the whole sensor array is arranged in the upper part of the specimen; this is for further study of the relationship between the distance to the sensors and the localization accuracy.

The impact monitoring platform was constructed for dataset collection and algorithm validation. In the dataset collection procedure, we conducted an impact test at every square region 15 times and stored the signals obtained by all FBG sensors for further analysis. In the algorithm validation procedure, the algorithm and classification model were programmed on the computer, and random square regions were impacted to check whether the algorithm could identify the correct impact area.

4. Impact Localization Algorithm Verification

4.1. Data Acquisition of Impact Event

For simplicity, we only show the signals obtained by FBG number 4 and number 2 during the dataset collection at position C5 in Figure 5. The vertical axis represents the (normalized) center wavelength of the FBG signal, which is linearly correlated with the vertical strain of the plate, and the horizontal axis is time. It can be seen from Figure 5 that the vertical coordinate values of the number 4 and number 2 signals are around 1536.7 nm and 1546.4 nm rather than zero, and that the center wavelengths differ between the FBG sensors. The number of peaks in the FBG number 4 and number 2 signals also shows that the impact tests were carried out more than once; we randomly select 10 of them to form our dataset. The detailed, normalized signals of one impact test obtained by the 2 sensors at the same time are shown in the left part. They show that although the peak of the impact event signal can be observed by both sensors, the TDOA cannot be obtained because of the low sampling rate and noise. Thus we use the EMD method to obtain the signal energy features, instead of time features, as the basis for localizing the impact event.

4.2. Feature Extraction of Impact Data

After setting the mean of each signal to zero, the data are decomposed into IMFs by EMD. Figure 6 shows one decomposition result of the signal obtained by FBG number 4 at C5. Comparing the IMFs, the 1st–3rd IMFs are the high-frequency components and contain most of the primary information of the impact signal, while the 4th–6th IMFs are the low-frequency trend and contain little information. So the 1st–3rd IMFs are selected to form the feature vectors.

The peak value and energy of each selected IMF are used to form the corresponding feature vectors, which are then stored in the dataset. Obviously, each feature vector in the dataset contains more than 2 elements. For visualization, we randomly select 6 classes with 9 samples in each class and then map the feature vectors to a two-dimensional space with the PCA method. The result is shown in Figure 7. It can be seen that the 2-dimensional features of impacts at different positions can be roughly separated, meaning that the features obtained by EMD indeed contain impact position information, but a proper classification method is needed to extract it.
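
For a visualization of this kind (Figure 7), a minimal sketch using scikit-learn's PCA to project the standardized feature vectors onto two dimensions; the function name and plotting details are illustrative.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_feature_map(X, y, classes):
    """Project standardized feature vectors to 2-D and color them by impact region."""
    X2 = PCA(n_components=2).fit_transform(X)
    for c in classes:
        mask = (y == c)
        plt.scatter(X2[mask, 0], X2[mask, 1], label=f"region {c}", s=20)
    plt.xlabel("principal component 1")
    plt.ylabel("principal component 2")
    plt.legend()
    plt.show()
```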

4.3. Parameters Selection of SVM

After feature extraction, the PSO method is used to automatically select the SVM parameters. The search range of the SVM parameter C is between 0 and 100 with a step length of 0.01, and that of the parameter σ is between 0 and 10 with a step length of 0.001. The number of particles is 20 and the maximum number of iterations is set to 50. The mutation probability of each particle at each iteration is 50% to avoid trapping in a local optimum. We use the 3-fold CV accuracy as the fitness function, which the PSO method tries to maximize. The curves of fitness against the number of iterations are shown in Figure 8, where the black curve represents the best accuracy and the blue curve is the average accuracy of the whole particle population.

Figure 8 demonstrates that the optimization process of PSO converges at around the 18th iteration with a best 3-fold CV accuracy of 78.11%, and the selected SVM parameters are (C, σ) = (2.28, 0.021).

4.4. Localization Algorithm Results

Once the proper parameters are obtained, we train the SVM classification model on the training set. The 3-fold CV accuracy and the overall accuracy on the training set and test set are then calculated, and the model is implemented in the practical impact monitoring system to verify its localization accuracy. Note that the maximum permissible error (MPE) of this system is 10 mm and each small square area is 50 × 50 mm, meaning that not only the correct small region of the impact but also the small square area just next to it is considered a valid and successful location in our practical test, as shown in Figure 9(a). The relationship between the distance and the accuracy is also tested. Figure 9(b) shows several impact localization test results, and the complete results are listed in Table 1.
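
To make this evaluation rule concrete, the sketch below counts a prediction as successful if it hits the true region or one of its immediate neighbors on the grid; the grid width, row-wise region numbering, and inclusion of diagonal neighbors are assumptions introduced for illustration, not taken from the paper.

```python
import numpy as np

def localization_accuracy(y_true, y_pred, n_cols=9):
    """Count a prediction as correct if it is the true region or an adjacent one.
    Regions are assumed to be numbered row by row on an n_cols-wide grid."""
    hits = 0
    for t, p in zip(y_true, y_pred):
        tr, tc = divmod(int(t), n_cols)
        pr, pc = divmod(int(p), n_cols)
        if abs(tr - pr) <= 1 and abs(tc - pc) <= 1:
            hits += 1
    return hits / len(y_true)
```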

The result of test 1 in Table 1 shows that the model works very well on the training set, with an accuracy of 99.67%. On the testing set, the accuracy of locating the exact area decreases to 82.78% because of generalization error. However, in the experiment, the correct area and the regions around it are all acceptable; thus the accuracy rises to 93.89%, which is satisfactory. Meanwhile, the accuracy increases as the distance between the impact position and the sensors decreases, with the training accuracy going from 99.67% to 100%, the testing accuracy from 82.78% to 89.29%, and the experimental accuracy from 93.89% to 97.14%. The result indicates that there is a tradeoff between the accuracy and the distance (or the number of FBG sensors), which makes our model applicable and adaptable to various SHM systems.

This means that our proposed algorithm combining EMD and PSO-SVM can effectively extract impact position information from undersampled signals and, within a certain range of distances, can maintain high localization accuracy.

5. Conclusion

In this paper, we have proposed a novel method combining PSO-SVM and EMD to localize impact events on a composite plate with embedded FBG sensors. The EMD method is used to extract impact information from the undersampled signals, and PSO-SVM is used to automatically obtain a well-trained localization classification model. An FBG-based impact monitoring system was then established to validate this method experimentally. The obtained results demonstrate that our method provides high localization accuracy in spite of the low FBG sampling rate. Furthermore, the relationship between distance and accuracy has also been investigated. The results show that, to a certain extent, the localization accuracy of our method increases as the distance decreases, while the performance remains satisfactory.

It can be concluded that the proposed method achieves precise localization for a composite plate with a low sampling rate and buried FBG sensors. In the future, we will apply this method to complex structures and test its performance for further improvement. Besides, more specific feature extraction methods and more targeted modifications of the classification algorithm will be investigated. Moreover, the relationship between the distance and the localization accuracy remains a valuable aspect for further study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by National Natural Science Foundation of China through Grant no. 51375030.