Wireless Communications and Mobile Computing
Volume 2018, Article ID 6934825, 8 pages
https://doi.org/10.1155/2018/6934825
Research Article

Pipeline Leak Aperture Recognition Based on Wavelet Packet Analysis and a Deep Belief Network with ICR

1School of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China
2School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
3School of Petrochemical Engineering, Liaoning Shihua University, Fushun 113001, China
4CNPC Northeast Refining & Chemical Engineering Co. Ltd. Shenyang Company, Shenyang 110167, China

Correspondence should be addressed to Zhiyong Hu; huzhiyong024@163.com

Received 5 April 2018; Accepted 27 June 2018; Published 16 August 2018

Academic Editor: Houbing Song

Copyright © 2018 Xianming Lang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The leakage aperture cannot be easily identified when an oil pipeline has small leaks. To address this issue, a leak aperture recognition method based on wavelet packet analysis (WPA) and a deep belief network (DBN) with independent component regression (ICR) is proposed. WPA is used to remove the noise in the collected sound velocity of the ultrasonic signal. Next, the denoised sound velocity of the ultrasonic signal is input into the DBN with ICR (DBN-ICR) to recognize different leak apertures. Because gradient-based optimization of the DBN weights can fall into a local optimum and learns slowly, ICR is used to replace the gradient fine-tuning method of the conventional DBN to improve the classification accuracy, and a Lyapunov function is constructed to prove the convergence of the learning process. Analysis of the acquired ultrasonic sound velocities for different leak apertures shows that the proposed method can quickly and effectively identify different leakage apertures.

1. Introduction

Because of pipeline aging, corrosion, or welding defects, small and slow leaks occur frequently; such leaks represent risks to the environment and can cause financial losses [1–4]. The pressure drop produced by a small leak is low and difficult to detect; nevertheless, small and slow leaks are the main forms of leakage in long-distance oil pipelines during their service period. How to identify a small pipeline leakage aperture in time has therefore become a popular topic in pipeline integrity management [5, 6]. Estimating the leakage aperture is thus important for developing the pipeline repair plan and evaluating the leakage area.

In a Kneser liquid, the transmission speed of an ultrasonic wave changes as the liquid pressure changes at a given temperature [7]. In this study, the sound velocity of an ultrasonic signal in a pipeline is used to identify different leakage apertures. In practical engineering, the signals collected by data acquisition equipment include noise from the external environment. The purpose of signal denoising is to distinguish the useful high-frequency signal from high-frequency noise interference and to remove that interference while retaining the useful information in the signal: weak signal components must not be filtered out during signal analysis, while the noise in the original signal is reduced as much as possible. Wavelet packet analysis (WPA) offers a powerful ability to denoise measured signals [8]; however, it requires selecting a suitable wavelet basis function to achieve a good denoising effect. When the measured signals are decomposed using empirical mode decomposition (EMD), the end effect makes it very difficult to accurately reconstruct the signal [9]. The local mean decomposition (LMD) method imposes more relaxed decomposition conditions than EMD, as the end effect is reduced and the over-envelope phenomenon is avoided in the decomposition process [6, 10]; however, the end effect still affects the signal reconstruction. Recently, variational mode decomposition (VMD) has been proposed as an adaptive, entirely nonrecursive signal decomposition method. VMD not only separates signal from noise well but also effectively suppresses modal aliasing; nevertheless, the end effect also affects its signal reconstruction [11, 12].

Theoretically, a deep belief network (DBN) [13] is composed of multiple restricted Boltzmann machines (RBMs), in contrast with shallow learning models such as the artificial neural network (ANN) and the support vector machine (SVM). Recently, the DBN has become a useful tool for classification [14, 15]. The most significant difference between deep and shallow learning methods is that the former can extract features from the original feature set automatically, instead of relying on manual feature selection. The two types of DBN optimization methods are adjustment of the depth structure and optimization of the related parameters [16, 17]. The supervised learning algorithm of the DBN is based on gradient backpropagation; as a result, the weight adjustment can easily fall into a local optimum and slow the learning speed, thereby affecting the classification accuracy. Selecting the optimal weights while avoiding gradient-based fine-tuning is therefore the key to improving the classification accuracy, which calls for a learning algorithm that does not use gradients. Independent component regression (ICR) is a layer-to-layer supervised parameter regression model that requires no gradients [18, 19]. Therefore, a weight optimization method based on ICR is proposed to avoid the local optima brought about by gradient algorithms, and the classification accuracy of the DBN is further improved.

The rest of this paper is organized as follows. Section 2 introduces the aperture identification method. In Section 3, the experimental results for the pipeline are analyzed and discussed. Finally, Section 4 draws the main conclusions.

2. Leak Aperture Identification Method

In this section, WPA and ICR are first introduced; then, a combination of WPA and the DBN with ICR (DBN-ICR) is proposed for identifying different leakage apertures of a pipeline. Afterwards, the aperture identification process based on the time-domain sound velocity of the ultrasonic signal, using WPA and DBN-ICR, is illustrated.

2.1. Principle of Wavelet Packet Analysis-Based Denoising

WPA [20–22] is an effective time-frequency analysis technique for nonstationary signals. The multiresolution analysis of the wavelet packet transform decomposes a signal into low-frequency and high-frequency sub-signals, and the process continues on the following layers until the preset level is reached; as a result, wavelet packet analysis achieves a markedly more accurate local analysis. Moreover, WPA can further segment and refine the high-frequency bands, whose width grows with the scale, so it is a precise analysis method for both broad high-frequency bands and narrow low-frequency bands. When the signal is decomposed by wavelet packets, a variety of wavelet basis functions can be adopted. For example, a signal decomposed by a three-layer wavelet packet decomposition [23] is shown in Figure 1.

Figure 1: Wavelet packet decomposition of the signal S.

The signal S decomposed by the three-layer wavelet packet decomposition tree can be represented as S = AAA3 + DAA3 + ADA3 + DDA3 + AAD3 + DAD3 + ADD3 + DDD3, where A denotes an approximation (low-frequency) band, D a detail (high-frequency) band, and the subscript the decomposition level.
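The three-layer packet tree above can be sketched in a few lines of Python. This is an illustrative sketch only: it uses the Haar filter instead of the db3 wavelet used in the paper, and zeroing the high-frequency bands stands in for the best-tree thresholding; the signal and noise levels are invented for the demonstration.

```python
import numpy as np

def haar_split(x):
    """One wavelet packet split: approximation (A) and detail (D) bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_merge(a, d):
    """Inverse of haar_split (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wp_decompose(x, levels=3):
    """Full wavelet packet tree: 2**levels equal-width sub-bands."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt += [a, d]
        bands = nxt
    return bands

def wp_reconstruct(bands):
    """Merge sub-bands pairwise back up the tree to the original signal."""
    while len(bands) > 1:
        bands = [haar_merge(bands[i], bands[i + 1]) for i in range(0, len(bands), 2)]
    return bands[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)                  # slow, clean component
noisy = signal + 0.3 * rng.standard_normal(t.size)  # broadband measurement noise

bands = wp_decompose(noisy, levels=3)               # 8 sub-bands, as in Figure 1
# Crude denoising: keep only the all-approximation band AAA3.
denoised = wp_reconstruct(bands[:1] + [np.zeros_like(b) for b in bands[1:]])
```

Because the split/merge pair is exactly invertible, reconstructing all eight untouched bands recovers the input to machine precision, while discarding the detail bands removes most of the broadband noise from this low-frequency test signal.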

2.2. Deep Belief Networks with Independent Component Regression

First, independent component regression (ICR) is introduced. Next, the DBN with ICR and its learning algorithm are presented.

2.2.1. Independent Component Regression

Independent component analysis (ICA) [24] is a statistical and computational technique for revealing the hidden factors of signals; the goal of ICA is to decompose the observed data linearly into statistically independent components. Denote the process observation samples as {x, y}, where x collects the independent variables and y is the dependent variable. After the independent components have been estimated with ICA, the ICR model can be obtained.

According to the ICR algorithm, it is assumed that the measured process variables x can be expressed as linear combinations of unknown independent components s; the ICR model is given by

x = As + e,

where A is the mixing matrix and e is the residual vector.

Thus, the independent components can be estimated as

ŝ = Wx,

where W is the separating matrix.

Thus, the linear regression matrix Q between y and ŝ can be obtained by least squares as Q = YŜᵀ(ŜŜᵀ)⁻¹, where Y and Ŝ stack the samples of y and ŝ, respectively.

Therefore, the desired model between y and x can be obtained from the ICR model as

y = Qŝ = QWx = Bx,

where B = QW.
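The mixing, separating, and regression steps above can be traced numerically. In this sketch, PCA whitening stands in for a full ICA estimate of the separating matrix W (the recovered components are only decorrelated, not strictly independent); the regression matrix Q is fitted by least squares exactly as in the ICR chain, and the toy dimensions, mixing matrix, and noise-free setup are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                   # number of samples
S_true = rng.laplace(size=(2, n))         # unknown independent components s
A = np.array([[1.0, 0.5], [0.3, 1.2]])    # mixing matrix
X = A @ S_true                            # observed variables: x = A s (noise-free sketch)
B_true = np.array([[2.0, -1.0]])
Y = B_true @ X                            # dependent variable: y = B x

# "Separating matrix" W via PCA whitening (a stand-in for ICA here).
C = np.cov(X)
eigval, eigvec = np.linalg.eigh(C)
W = np.diag(eigval ** -0.5) @ eigvec.T
S_hat = W @ X                             # estimated components: s_hat = W x

# Least-squares regression matrix Q, then combined model y = Q W x = B x.
Q = Y @ S_hat.T @ np.linalg.inv(S_hat @ S_hat.T)
B = Q @ W
Y_pred = B @ X
```

Because y is an exact linear function of x in this toy setup, the chained estimate B = QW recovers the true regression matrix, which is the identity the ICR model relies on.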

2.2.2. Deep Belief Networks with Independent Component Regression Learning Process

Please refer to [25–27] for more information on DBN. ICR is not only a regression method but can also be regarded as a supervised learning algorithm. The supervised fine-tuning of the DBN with ICR starts from the classifier layer, and the ICR method is repeatedly applied to model every two hidden layers from the top layer to the bottom layer. The ICR algorithm thus replaces gradient-based supervised learning; the goal of DBN-ICR is to overcome the low accuracy and long training time of the conventional DBN.

For the classifier layer and the last hidden layer, the detailed training steps are as follows:

(a) First, the state of the last hidden layer is extracted as the independent variable, and the classifier layer is used as the dependent variable.

It is assumed that the classifier matrix Y is c × n dimensional and the last hidden layer state matrix H is m × n dimensional; the sample observation matrices are {H, Y},

where Y is taken from the n samples and H is the last hidden layer state matrix, obtained from the hidden layer of the last RBM.

(b) Independent components S are extracted from the observation sample matrix H.

(c) Linear regression is then performed between S and the output matrix Y.

(d) Therefore, the ICR model can be expressed by

Y = QS = QWH,

where the output weight matrix optimized by the ICR model is described by B = QW.

Note that the number of independent components, k, is very important for the accuracy of ICR: k is determined by minimizing the absolute prediction error, and the corresponding k is taken as optimal.

Thus, the ICR fine-tuning process between the output layer and the last hidden layer is completed. Next, ICR is repeatedly conducted on every two adjacent hidden layers, starting from the top pair and moving down to the bottom pair. As a result, the remaining weight matrices optimized by ICR are obtained.

The contrastive divergence (CD) algorithm is first used to train each RBM from the bottom layer to the top layer, and the fixed weights (initialization weights) after unsupervised training are determined. Next, the actual output is used to build the layer-by-layer ICR model and fine-tune the initialization weights layer by layer; in the process of establishing the ICR model, the independent variable comes from the state variables of the RBMs after the unsupervised training is completed.
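The unsupervised stage described above can be sketched as a single RBM trained with CD-1. The layer sizes, learning rate, epoch count, and toy binary data below are illustrative assumptions, not the paper's settings; in the full method, the ICR fine-tuning of Section 2.2.2 would then adjust the stacked weights instead of backpropagation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data: two repeating 6-bit patterns plus 5% bit-flip noise.
patterns = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = patterns[rng.integers(0, 2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))

n_vis, n_hid, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def recon_error(v):
    """Mean squared error of a one-step mean-field reconstruction."""
    h = sigmoid(v @ W + b_hid)
    v_rec = sigmoid(h @ W.T + b_vis)
    return np.mean((v - v_rec) ** 2)

err_before = recon_error(data)
for _ in range(300):                                   # CD-1 epochs
    h0 = sigmoid(data @ W + b_hid)
    h0_s = (rng.random(h0.shape) < h0).astype(float)   # sample hidden states
    v1 = sigmoid(h0_s @ W.T + b_vis)                   # one Gibbs step down...
    h1 = sigmoid(v1 @ W + b_hid)                       # ...and back up
    W += lr * (data.T @ h0 - v1.T @ h1) / len(data)    # positive minus negative phase
    b_vis += lr * (data - v1).mean(axis=0)
    b_hid += lr * (h0 - h1).mean(axis=0)
err_after = recon_error(data)
```

Stacking several such RBMs, each trained on the hidden activations of the one below, yields the initialization weights that the ICR model then fine-tunes.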

The architecture of the DBN-ICR algorithm is shown in Figure 2.

Figure 2: Architecture of DBN-ICR.
2.3. Convergence Analysis

For the proposed DBN-ICR, the weight parameters are crucial to convergence. Thus, a theoretical proof of the convergence of DBN-ICR is given in this section. According to the learning process of DBN-ICR, the whole dynamic transmission error e can be described as in [28]

e = f*(x, θ*) − f(x, θ),

where f* and f are the ideal deep belief network architecture function and the obtained deep belief network architecture function of DBN-ICR for the training samples, respectively; θ is the final weight derived from DBN-ICR, and θ* is the ideal weight of DBN-ICR for the training samples.

According to [29], it is assumed that

‖θ* − θ‖ ≤ δ,

where ‖·‖ is the Euclidean distance and δ > 0. Because θ is bounded and, by the architecture of deep belief networks, θ* is also bounded, this assumption is achievable.

Theorem 1. Considering a system stabilized by (13) and (14), if DBN-ICR is used to recognize the different leak apertures, then the classification error converges to a finite vector: ‖e‖ ≤ ε (ε is a small positive number). Moreover, as the number of samples and the time increase, e is uniformly ultimately bounded, and it approaches 0 when the training samples are adequate.

Proof. Given the Lyapunov function V(k) = (1/2)e(k)ᵀe(k), the change of V is described as ΔV(k) = (1/2)[e(k+1)ᵀe(k+1) − e(k)ᵀe(k)]. To further analyze (16), two cases are discussed:
If ‖e(k+1)‖ ≤ ‖e(k)‖, then ΔV(k) ≤ 0, and ΔV is negative semidefinite.
If ‖e(k+1)‖ > ‖e(k)‖, then ΔV(k) > 0, and ΔV is positive definite. However, as time goes on or the number of samples increases, eventually either ‖e(k+1)‖ ≤ ‖e(k)‖ or e(k+1) → 0; both situations reduce to the first case.
Thus, ΔV is negative semidefinite.
The training error sum is expressed by Σₖ ΔV(k) = V(∞) − V(0). According to the Lyapunov stability theorem, V(∞) is bounded, where k → ∞ indicates that the number of training samples is large enough and that all the training samples have been put into DBN-ICR.
Therefore, the convergence of DBN-ICR with respect to the weight parameters is guaranteed theoretically.

2.4. Leak Aperture Recognition Based on WPA and DBN-ICR

It has been shown that a deep belief network can achieve a lower error rate than traditional methods in fault pattern classification [30]. Hence, exploiting the advantages of both WPA and DBN-ICR, we propose a hybrid leakage aperture identification method. The sound velocities of the ultrasonic signals are first denoised using WPA. The signals corresponding to different leakage apertures may differ; however, it is hard to differentiate the apertures through pattern recognition without feature extraction based on prior knowledge.

Therefore, DBN-ICR is applied to identify the different leakage apertures from the signals after WPA denoising. The schematic of the proposed method is shown in Figure 3.

Figure 3: Schematic of the method to recognize pipeline leakage apertures.

3. Field Experiment and Analysis

The acquired sound velocities of the ultrasonic signals are first processed by WPA and then classified by DBN-ICR. In addition, a comparative study between the proposed method and existing pipeline leakage aperture recognition methods is performed.

3.1. Field Experiment

Because of safety and cost concerns, water instead of oil was used to simulate pipeline leaks in the experiment. According to the leakage experimental protocols for examining apertures of different sizes, as shown in Figure 4, the sound velocities of upstream or downstream ultrasonic signals were chosen for the test. The pipeline segment is 2,800 m long with an inner diameter of 50 mm, and the leakage apertures are 4 mm, 6 mm, 10 mm, and 15 mm. The operating conditions were as follows: (a) water was transported at 12 m³/h; (b) valves with different apertures were installed to emulate leaks; (c) the leakage flow and energy were supplied by the upstream pump (with a lift of 120 m); (d) the leaked water was stored in a tank. The experimental apparatus is shown in Figure 5. The WPA and DBN-ICR algorithms are tested in the MATLAB environment. We acquired a database from the ultrasonic equipment through a National Instruments DAQ-9184 at a sampling rate of 100 Hz. All the methods are implemented in MATLAB R2014a on a PC with an Intel Pentium processor (2.90 GHz) and 6 GB RAM.

Figure 4: Different leak apertures.
Figure 5: Experiment apparatus.
3.2. Sound Velocity of Ultrasonic Processing and Aperture Recognition

The sound velocities of the ultrasonic signals from different apertures were acquired by an ultrasonic sensor at the end of the pipeline; these velocities were used as the database to verify the proposed algorithm. The acquired signals are shown in Figure 6.

Figure 6: Ultrasonic signals under different leakage apertures.

The measured sound velocities of the ultrasonic signals are decomposed with a three-layer WPA best tree using the db3 wavelet, and the sound velocity of the ultrasonic signal is reconstructed from the best-tree structure. To validate the denoised signals, which are the reconstructed signals obtained via the wavelet packet, the results are compared with the signals denoised by VMD, EMD, and LMD, as shown in Figure 7.

Figure 7: Pipeline inlet sound velocities of the ultrasonic signals of the 4 mm leakage aperture.

Figure 7 shows that wavelet packet analysis denoises the measured sound velocities of the ultrasonic signal more effectively than the other methods (EMD, LMD, and VMD). When the measured sound velocities are reconstructed with EMD, the end effect makes it very difficult to accurately remove the noise. The LMD method imposes more relaxed decomposition conditions than EMD, as the end effect is reduced and the over-envelope phenomenon is avoided in the reconstruction process; however, the end effect still affects the noise removal. VMD can adaptively extract the intrinsic modes of the original sound velocities of the ultrasonic signal, but the end effect also affects its reconstructed signal.

Therefore, WPA is used to denoise the sound velocities of the ultrasonic signals at the end of the pipeline; the sound velocities for the normal condition and for leakage apertures of 4 mm, 6 mm, 10 mm, and 15 mm are collected by the ultrasonic equipment. Five cases are thus created, and 1,000 samples from each case are chosen for training the DBN-ICR and the DBN, giving 5,000 samples of sound velocities of the ultrasonic signals in total. Of these, 4,000 samples are chosen as training samples, and the others are used as testing samples.

In this simulation, the denoised sound velocities of the ultrasonic signals are the inputs of the DBN-ICR and the DBN, and five feature vectors (absolute mean value, effective value, kurtosis, pulse factor, and peak factor) are input into the least squares twin support vector machine (LSTSVM) [31], the least squares support vector machine (LSSVM), the support vector machine (SVM), and the backpropagation neural network (BPNN). The feature vectors of 4,000 groups (800 per condition) are chosen as training samples and the others are used as testing samples. This paper adopts the "One-against-All" (OAA) algorithm for multiclassification.

We follow common practice in selecting the DBN architecture: the number of neurons in each layer is smaller than in the previous layer, so that the DBN acts as a feature extraction process. In this study, three hidden layers are selected for DBN-ICR by simulation. Moreover, to determine the number of neurons in the hidden layer, a trial-and-error approach is used to examine the relation between the number of neurons in the hidden layer and the classification error. The resulting relation curve is shown in Figure 8. Figure 8 reveals that the best number of neurons in hidden layer 3 is 65, and the corresponding root mean square error (RMSE) is 0.01; the RMSE is given as

Figure 8: Relation curve between the number of neurons in the hidden layer and the classification error.

RMSE = sqrt((1/N) Σᵢ₌₁ᴺ (yᵢ − ŷᵢ)²), where yᵢ is the desired value, ŷᵢ is the output value of DBN-ICR, and N is the number of testing samples.
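The RMSE criterion and the trial-and-error neuron search can be sketched as follows. The candidate layer sizes and the scoring function are placeholders (the paper's trained networks are not available), so a quadratic curve with its minimum at 65 neurons merely mimics the shape of Figure 8; a real run would train DBN-ICR once per candidate and score the test set with rmse.

```python
import numpy as np

def rmse(y_desired, y_output):
    """Root mean square error over N testing samples."""
    y_desired = np.asarray(y_desired, dtype=float)
    y_output = np.asarray(y_output, dtype=float)
    return np.sqrt(np.mean((y_desired - y_output) ** 2))

def pick_hidden_size(candidates, evaluate):
    """Trial and error: evaluate each candidate size, keep the lowest error."""
    scores = {n: evaluate(n) for n in candidates}
    best = min(scores, key=scores.get)
    return best, scores[best]

# Placeholder evaluator with a minimum of 0.01 at 65 neurons.
best_n, best_rmse = pick_hidden_size(
    range(35, 96, 10), lambda n: 0.01 + ((n - 65) / 100.0) ** 2
)
```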

According to the above analysis, the architecture of both DBN-ICR and DBN is selected as 400-200-100-65-5. To improve the fine-tuning accuracy, the number of independent components must be set appropriately; the trial-and-error approach is used to determine the optimal number of independent components, and the relation curve between the number of independent components and the classification error is shown in Figure 9.

Figure 9: Relation curve between the number of independent components and the classification error.

Figure 9 reveals the best numbers of independent components, for which the corresponding classification error is 0.01. Thus, the DBN-ICR is constructed with the optimal numbers of independent components. After unsupervised and supervised learning, we use DBN-ICR to recognize the different leakage apertures. To fairly evaluate the proposed DBN-ICR, the input and output of the DBN are the same as those of DBN-ICR; moreover, results on the same 4,000 training samples using LSTSVM, LSSVM, SVM, and BPNN are obtained for comparison with the simulation. Here, DBN denotes the traditional DBN.

The DBN's weights are initialized randomly, the biases are initialized to zero, the maximum number of iterations is 1, the batch size is 1, and the learning rate is 0.4. In LSTSVM, we set the slack variable to 0.01 and the kernel parameter to 1. The OAA method is used to accomplish multiclassification. We employ LS-SVMlab to implement the LSSVM multiclassifier, with a slack variable of 1 and a kernel parameter of 1, and Libsvm to implement the SVM multiclassifier, with a slack variable of 10 and a kernel parameter of 0.1. This paper chooses a three-layer BPNN with 100 middle-layer nodes, 100 iterations, a learning rate of 0.5, and a preset minimum training error. A comparison of the testing results with LSTSVM, LSSVM, SVM, and BPNN is shown in Table 1.
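The One-against-All scheme used by the comparison classifiers can be sketched generically. Plain least-squares binary scorers stand in here for LSTSVM/LSSVM/SVM (whose training objectives differ); the three-cluster 2-D data are invented for the demonstration. One scorer is fitted per class with +1/−1 targets, and the highest score wins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-D data: three well-separated clusters (classes 0, 1, 2).
centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)

def train_oaa(X, y, n_classes):
    """One least-squares scorer per class: target +1 for the class, -1 for the rest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append a bias column
    W = np.zeros((n_classes, Xb.shape[1]))
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)
        W[k], *_ = np.linalg.lstsq(Xb, t, rcond=None)
    return W

def predict_oaa(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)                 # the highest-scoring class wins

W = train_oaa(X, y, 3)
accuracy = np.mean(predict_oaa(W, X) == y)
```

The same wrapper applies unchanged to any binary scorer, which is why OAA is a convenient way to turn the two-class SVM variants into five-class aperture recognizers.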

Table 1: Simulation results of different methods in different leakage apertures.

The experimental results are listed in Table 1, which reveals that DBN-ICR has the best results in terms of average training time. Note that, although the accuracy of BPNN is as good as that of DBN, the time-domain sound velocities of the ultrasonic signals were input into DBN and DBN-ICR without experience-based feature selection. Table 1 also reveals that LSTSVM improves on LSSVM and SVM in both accuracy and average running time; the accuracy of LSSVM is the same as that of SVM, but the average running time of LSSVM is shorter. Compared with DBN, DBN-ICR improves both the recognition accuracy for the different leakage apertures and the program running time, primarily because of the ICR fine-tuning in DBN-ICR.

3.3. Results and Analysis

The classification rate is calculated as the ratio of the number of correctly classified test samples to the total number of test samples. The proposed method and the LSTSVM-based, LSSVM-based, SVM-based, and BPNN-based methods are used to identify the different leakage apertures. 5,000 trials are performed, where 80% of samples are randomly selected for training and other samples are used for testing.

In the simulation with the proposed method, the average testing accuracy is 98.98%; i.e., all the different apertures are effectively recognized. Using the LSTSVM-based, LSSVM-based, SVM-based, and BPNN-based methods, the average testing accuracies are 98.58%, 98.42%, 98.1%, and 98.91%, respectively. This result implies that the proposed method obtains a higher recognition accuracy and better robustness than the other methods in distinguishing the different apertures.

DBN can process the sound velocities of ultrasonic signals of a pipeline in the time domain to recognize the different leakage apertures directly, without feature extraction and feature selection by prior knowledge. Thus, the intelligence of leak detection and leak aperture recognition is enhanced.

In this paper, we have not studied leak location or the architecture selection of the DBN. At present, the architecture of a deep learning network, including the number of hidden layers and the number of neurons in each hidden layer, is selected through empirical or experimental methods; this requires substantial work, and the selected architecture may affect the accuracy or speed. Adaptive architecture selection remains a difficult open problem for deep neural networks; we will study it in the future.

4. Conclusions

In this paper, a method for pipeline leak aperture recognition based on WPA and DBN-ICR was proposed. To effectively extract more valuable leakage information, WPA was applied to refine the measured sound velocity of the ultrasonic signal and design the original data set. To achieve the desired leak aperture recognition performance and remove the requirement for manual feature selection, DBN-ICR was used as the classifier. To investigate its effectiveness, the proposed method was tested on the sound velocity data of an experimental pipeline's ultrasonic signals to recognize the different leak apertures. The results showed that the proposed method can reliably recognize the different leakage apertures.

Data Availability

Because all the experimental data will be used in a patent application for this project, we cannot share them.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61673199.

References

  1. S. Datta and S. Sarkar, “A review on different pipeline fault detection methods,” Journal of Loss Prevention in the Process Industries, vol. 41, pp. 97–106, 2016.
  2. G. Bolzon, T. Boukharouba, G. Gabetta, M. Elboujdaini, and M. Mellas, Integrity of Pipelines Transporting Hydrocarbons, Springer, The Netherlands, 2011.
  3. M. Henrie, P. Carpenter, and R. E. Nicholas, Pipeline Leak Detection Handbook, Gulf Professional Publishing, 2016.
  4. A. Martini, M. Troncossi, and A. Rivola, “Automatic leak detection in buried plastic pipes of water supply networks by means of vibration measurements,” Shock and Vibration, vol. 2015, Article ID 165304, 2015.
  5. J. Liu, H. Su, Y. Ma, G. Wang, Y. Wang, and K. Zhang, “Chaos characteristics and least squares support vector machines based online pipeline small leakages detection,” Chaos, Solitons & Fractals, vol. 91, pp. 656–669, 2016.
  6. J. Sun, Q. Xiao, J. Wen, and F. Wang, “Natural gas pipeline small leakage feature extraction and recognition based on LMD envelope spectrum entropy and SVM,” Measurement, vol. 55, no. 9, pp. 434–443, 2014.
  7. D. Wang, Z. Song, Y. Wu, and Y. Jiang, “Ultrasonic wave based pressure measurement in small diameter pipeline,” Ultrasonics, vol. 63, pp. 1–6, 2015.
  8. J. Hu, L. Zhang, and W. Liang, “Detection of small leakage from long transportation pipeline with complex noise,” Journal of Loss Prevention in the Process Industries, vol. 24, no. 4, pp. 449–457, 2011.
  9. C. Guo, Y. Wen, P. Li, and J. Wen, “Adaptive noise cancellation based on EMD in water-supply pipeline leak detection,” Measurement, vol. 79, pp. 188–197, 2016.
  10. J. Sun, Q. Xiao, J. Wen, and Y. Zhang, “Natural gas pipeline leak aperture identification and location based on local mean decomposition analysis,” Measurement, vol. 79, pp. 147–157, 2016.
  11. Z. Li, J. Chen, Y. Zi, and J. Pan, “Independence-oriented VMD to identify fault feature for wheel set bearing fault diagnosis of high speed locomotive,” Mechanical Systems and Signal Processing, vol. 85, pp. 512–529, 2017.
  12. K. Dragomiretskiy and D. Zosso, “Variational mode decomposition,” IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 531–544, 2014.
  13. J. Li, X. Fan, G. Chen, Z. Gao, M. Chen, and L. Li, “A DNN for small leakage detection of positive pressure gas pipelines in the semiconductor manufacturing,” in Proceedings of the 2016 IEEE International Conference of Online Analysis and Computing Science, ICOACS 2016, pp. 384–388, China, May 2016.
  14. Z. Zhang and J. Zhao, “A deep belief network based fault diagnosis model for complex chemical processes,” Computers & Chemical Engineering, vol. 107, pp. 395–407, 2017.
  15. H. Shao, H. Jiang, H. Zhang, and T. Liang, “Electric locomotive bearing fault diagnosis using a novel convolutional deep belief network,” IEEE Transactions on Industrial Electronics, vol. 65, no. 3, pp. 2727–2736, 2017.
  16. S. Kamada and T. Ichimura, “An adaptive learning method of Deep Belief Network by layer generation algorithm,” in Proceedings of the 2016 IEEE Region 10 Conference, TENCON 2016, pp. 2967–2970, Singapore, November 2016.
  17. T. Ichimura and S. Kamada, “Adaptive learning method of recurrent temporal deep belief network to analyze time series data,” in Proceedings of the 2017 International Joint Conference on Neural Networks, IJCNN 2017, pp. 2346–2353, USA, May 2017.
  18. Y. Zhang and Y. Zhang, “Optimized independent components for parameter regression,” Chemometrics and Intelligent Laboratory Systems, vol. 104, no. 2, pp. 214–222, 2010.
  19. Z. Ge, Z. Song, and P. Wang, “Probabilistic combination of local independent component regression model for multimode quality prediction in chemical processes,” Chemical Engineering Research and Design, vol. 92, no. 3, pp. 509–521, 2014.
  20. T. L. T. Da Silveira, A. J. Kozakevicius, and C. R. Rodrigues, “Automated drowsiness detection through wavelet packet analysis of a single EEG channel,” Expert Systems with Applications, vol. 55, pp. 559–565, 2016.
  21. D. Lei, L. Yang, W. Xu, P. Zhang, and Z. Huang, “Experimental study on alarming of concrete micro-crack initiation based on wavelet packet analysis,” Construction and Building Materials, vol. 149, pp. 716–723, 2017.
  22. J. Li, X. Cui, H. Song, Z. Li, and J. Liu, “Threshold selection method for UWB TOA estimation based on wavelet decomposition and kurtosis analysis,” EURASIP Journal on Wireless Communications and Networking, vol. 2017, no. 1, 2017.
  23. X. Lang, P. Li, Y. Li, and H. Ren, “Leak location of pipeline with multibranch based on a cyber-physical system,” Information, vol. 8, no. 4, p. 113, 2017.
  24. C. Tong, T. Lan, and X. Shi, “Soft sensing of non-Gaussian processes using ensemble modified independent component regression,” Chemometrics and Intelligent Laboratory Systems, vol. 157, pp. 120–126, 2016.
  25. W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, and F. E. Alsaadi, “A survey of deep neural network architectures and their applications,” Neurocomputing, vol. 234, pp. 11–26, 2017.
  26. L. Nie, D. Jiang, S. Yu, and H. Song, “Network traffic prediction based on deep belief network in wireless mesh backbone networks,” in Proceedings of the 2017 IEEE Wireless Communications and Networking Conference, WCNC 2017, USA, March 2017.
  27. S. Jeschke, C. Brecher, H. Song, and D. B. Rawat, Industrial Internet of Things: Cybermanufacturing Systems, Springer International Publishing, Switzerland, 2017.
  28. H. Han and J. Qiao, “A self-organizing fuzzy neural network based on a growing-and-pruning algorithm,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 6, pp. 1129–1143, 2010.
  29. Y. Li, S. Tong, and T. Li, “Observer-based adaptive fuzzy tracking control of MIMO stochastic nonlinear systems with unknown control directions and unknown dead zones,” IEEE Transactions on Fuzzy Systems, vol. 23, no. 4, pp. 1228–1241, 2015.
  30. L. Zhang, H. Gao, J. Wen, S. Li, and Q. Liu, “A deep learning-based recognition method for degradation monitoring of ball screw with multi-sensor data fusion,” Microelectronics Reliability, vol. 75, pp. 215–222, 2017.
  31. X. Lang, P. Li, Z. Hu, H. Ren, and Y. Li, “Leak detection and location of pipelines based on LMD and least squares twin support vector machine,” IEEE Access, vol. 5, pp. 8659–8668, 2017.