Abstract

When an oil pipeline has small leaks, the leakage aperture cannot be easily identified. To address this issue, a leak aperture recognition method based on wavelet packet analysis (WPA) and a deep belief network (DBN) with independent component regression (ICR) is proposed. WPA is used to remove the noise in the collected sound velocity of the ultrasonic signal. Next, the denoised sound velocity of the ultrasonic signal is input into the deep belief network with independent component regression (DBN-ICR) to recognize different leak apertures. Because optimizing the weights of the DBN with the gradient leads to a local optimum and a slow learning rate, ICR is used to replace the gradient fine-tuning method in the conventional DBN to improve the classification accuracy, and a Lyapunov function is constructed to prove the convergence of the learning process. By analyzing the acquired ultrasonic sound velocities of different leak apertures, the results show that the proposed method can quickly and effectively identify different leakage apertures.

1. Introduction

Because of pipeline aging, corrosion, or welding defects, small leaks and slow leaks occur frequently; such leaks pose risks to the environment and can cause financial losses [1–4]. The pressure drop produced by a small leak is low and difficult to detect; however, small leaks and slow leaks are the main forms of leakage of long-distance oil pipelines during the service period. Determining how to identify the small leakage aperture of a pipeline in time has become a popular topic of study in pipeline integrity management [5, 6]. Thus, it is important to estimate the leakage aperture to assist in the development of the pipeline repair plan and in the evaluation of the leakage area.

In a Kneser liquid, the transmission speed of an ultrasonic wave changes as the liquid pressure changes at a given temperature [7]. In this study, the sound velocity of an ultrasonic signal of a pipeline is used to identify different leakage apertures. In practical engineering, the signals collected by data acquisition equipment contain noise from the external environment; the purpose of signal denoising is to distinguish the useful high-frequency signal components from the interference caused by high-frequency noise, to remove the high-frequency interference, and to retain the useful information in the signal. In the process of signal analysis, the weak signal must not be filtered out, while the noise in the original signal should be reduced as much as possible. Wavelet packet analysis (WPA) presents a powerful ability for denoising the measured signals [8]. However, this method requires the selection of a suitable wavelet basis function to achieve a good denoising effect. When the measured signals are decomposed using empirical mode decomposition (EMD), because of the influence of the end effect, it is very difficult to accurately reconstruct the signal [9]. The local mean decomposition (LMD) method is more relaxed than EMD in terms of decomposition conditions, as the end effect is reduced and the over-envelope phenomenon is avoided in the decomposition process [6, 10]. However, the end effect still affects the signal reconstruction. Recently, variational mode decomposition (VMD) has been proposed as an adaptive, entirely nonrecursive signal decomposition method. VMD not only separates signal and noise well but can also effectively suppress modal aliasing; nevertheless, the end effect also affects the signal reconstruction [11, 12].

Theoretically, a deep belief network (DBN) [13] is composed of multiple restricted Boltzmann machines (RBMs), in contrast with shallow learning models such as the ANN and the SVM. Recently, the DBN has become a useful tool for classification [14, 15]. The most significant difference between deep learning methods and shallow learning methods is that the former can extract features from the original feature set automatically, instead of selecting features manually. The two types of optimization methods for the DBN are the adjustment of the depth structure and the optimization of the related parameters [16, 17]. The supervised learning algorithm of the DBN is based on the gradient backpropagation algorithm; as a result, the weight adjustment can easily fall into a local optimum and slow the learning speed, thereby affecting the classification accuracy. Determining how to select the optimal weights and avoid the gradient-based fine-tuning method is the key to improving the classification accuracy. To overcome these difficulties, it is necessary to find a learning algorithm that does not rely on the gradient. Independent component regression (ICR) is a layer-to-layer supervised parameter regression model without the gradient [18, 19]. Therefore, a weight optimization method based on ICR is proposed to avoid the local optimum brought by the gradient algorithm and further improve the classification accuracy of the DBN.

The rest of this paper is organized as follows. Section 2 introduces the aperture identification method. In Section 3, the experimental results for a pipeline are analyzed and discussed. Finally, Section 4 draws the main conclusions.

2. Leak Aperture Identification Method

In this section, WPA and DBN-ICR are first introduced, and then a combination of WPA and DBN-ICR is proposed for the identification of different leakage apertures of a pipeline. Afterwards, the aperture identification process based on the time-domain sound velocity of the ultrasonic signal through WPA and DBN-ICR is illustrated.

2.1. Principle of Wavelet Packet Analysis-Based Denoising

WPA [20–22] is an effective time-frequency analysis technique for nonstationary signals. Multiresolution analysis with the wavelet packet transform decomposes a signal into low-frequency and high-frequency components. The decomposition then continues layer by layer until the preset level is reached; as a result, wavelet packet analysis has a markedly better ability to perform accurate local analysis. Moreover, wavelet packet analysis can further segment and refine the frequency bands that broaden with increasing scale. Therefore, WPA is a precise analysis method characterized by broad high-frequency bandwidth and narrow low-frequency bandwidth. When the signal is decomposed by wavelet packets, a variety of wavelet basis functions can be adopted. For example, the signal can be decomposed by wavelet packets using a three-layer decomposition [23], as shown in Figure 1.

The signal $S$ decomposed by the three-layer wavelet packet decomposition tree can be represented as the sum of the subband signals of the terminal nodes:

$$S = S_{3,0} + S_{3,1} + S_{3,2} + \cdots + S_{3,7}$$
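For illustration, the decomposition and reconstruction described above can be sketched as follows, assuming the PyWavelets (pywt) package, the db3 wavelet used later in Section 3, and a soft universal threshold; the threshold rule is an illustrative choice and is not specified in this paper.

```python
# Minimal sketch (not the authors' code) of three-layer wavelet packet
# denoising with PyWavelets; db3 matches Section 3, while the soft
# universal threshold is an illustrative choice.
import numpy as np
import pywt

def wpa_denoise(signal, wavelet="db3", level=3):
    """Decompose the signal into the 2**level terminal nodes of the wavelet
    packet tree, threshold every node, and reconstruct the denoised signal."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # Noise level estimated from the highest-frequency node (median absolute
    # deviation), then the universal threshold sigma * sqrt(2 ln N).
    sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    for node in nodes:
        wp[node.path] = pywt.threshold(node.data, threshold, mode="soft")
    return wp.reconstruct(update=True)[: len(signal)]

# Usage on a synthetic sound-velocity record sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
noisy = 1480.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * np.random.randn(t.size)
denoised = wpa_denoise(noisy)
```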

2.2. Deep Belief Networks with Independent Component Regression

First, the independent component regression (ICR) is presented. Next, DBN with ICR and its learning algorithm are presented.

2.2.1. Independent Component Regression

Independent component analysis (ICA) [24] is a statistical and computational technique for revealing the hidden factors of signals; the goal of ICA is to decompose the observed data linearly into statistically independent components. Denote the process observation samples as $\{X, Y\}$; the samples include the independent variables $X$ and the dependent variable $Y$. After the independent components have been estimated by ICA, the ICR model can be obtained.

According to the ICR algorithm, it is assumed that the measured process variables $x$ can be expressed as linear combinations of unknown independent components $s$; the ICR model is given by

$$x = A s + e$$

where $A$ is the mixing matrix and $e$ is the residual vector.

Thus, the independent components can be estimated as

$$\hat{s} = W x$$

where $W$ is the separating matrix.

Thus, the linear regression matrix between the estimated independent components and the dependent variable can be expressed as

$$Q = \left( \hat{S}^{T} \hat{S} \right)^{-1} \hat{S}^{T} Y$$

Therefore, the desired model between $x$ and $y$ can be obtained by the ICR model, which is given as

$$\hat{y} = Q^{T} \hat{s} = Q^{T} W x = B x$$

where $B = Q^{T} W$.
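A minimal sketch of an ICR model under the notation above is given below; it assumes scikit-learn's FastICA as the ICA estimator and ordinary least squares for the regression step, and the class and function names are ours, not the paper's.

```python
# Sketch of ICR: independent components are estimated from X with FastICA,
# then Y is regressed on them by least squares. Illustration only.
import numpy as np
from sklearn.decomposition import FastICA

class ICR:
    def fit(self, X, Y, n_components):
        """Estimate independent components from X, then regress Y on them."""
        self.ica = FastICA(n_components=n_components, random_state=0)
        S = self.ica.fit_transform(X)          # estimated components (N x d)
        self.y_mean = Y.mean(axis=0)
        # Least-squares regression matrix Q such that Y - mean(Y) ~ S Q.
        self.Q, *_ = np.linalg.lstsq(S, Y - self.y_mean, rcond=None)
        return self

    def predict(self, X):
        return self.ica.transform(X) @ self.Q + self.y_mean

def select_n_components(X, Y, candidates):
    """Pick the component count that minimizes the absolute prediction error,
    mirroring the selection rule described later in the text."""
    errors = {d: np.abs(Y - ICR().fit(X, Y, d).predict(X)).sum()
              for d in candidates}
    return min(errors, key=errors.get)
```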

2.2.2. Deep Belief Networks with Independent Component Regression Learning Process

Please refer to [25–27] for more information on the DBN. ICR is not only a regression method but can also be regarded as a supervised learning algorithm. The DBN with ICR supervised fine-tuning starts from the classifier layer, and the ICR method is repeatedly used to model every two hidden layers from the top layer to the bottom layer. Therefore, the ICR algorithm can be used to replace gradient-based supervised learning; the goal of DBN-ICR is to overcome the low accuracy and time-consuming training of the conventional DBN.

For the classifier layer and the last hidden layer, the detailed training steps are as follows:

(a) First, the state of the last hidden layer is extracted as the independent variable, and the classifier layer is used as the dependent variable.

It is assumed that the classifier (output) matrix $Y$ and the last hidden layer state matrix $H$ each have one row per training sample; the sample observation matrices are as follows:

$$\{H, Y\}$$

where $Y$ is obtained from the sample labels and $H$ is the last hidden layer state matrix, obtained from the hidden layer of the last RBM.

(b) Independent components $\hat{S}$ are extracted from the observation sample matrix $H$.

(c) The linear regression is then performed on $\hat{S}$ and the output matrix $Y$.

(d) Therefore, the ICR model can be expressed by

$$\hat{Y} = \hat{S} Q = H W^{T} Q$$

where the output weight matrix optimized by the ICR model is described by

$$W_{\mathrm{out}} = W^{T} Q$$

Note that the number of independent components $d$ is very important for the accuracy of ICR. It is selected according to

$$d^{*} = \arg\min_{d} \sum_{i=1}^{N} \left| y_{i} - \hat{y}_{i}(d) \right|$$

where $d$ is determined by minimizing the absolute error of prediction; the corresponding $d^{*}$ is optimal.

Thus, the ICR fine-tuning process is completed between the output layer and the last hidden layer. Next, ICR is repeatedly conducted on every two adjacent hidden layers, starting from the two topmost hidden layers and moving down to the two bottommost hidden layers. As a result, the remaining weight matrices optimized by ICR are obtained layer by layer.

The contrastive divergence (CD) algorithm is used to train each RBM from the bottom layer to the top layer first, and the fixed weights (initialization weights) after unsupervised training are determined. Next, the actual output is used to build the layer-by-layer ICR model and fine-tune the initialization weights layer by layer; in the process of establishing the ICR model, the independent variable comes from the state variables of the RBMs after the unsupervised training is completed.
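The overall training flow can be sketched as follows, assuming scikit-learn's BernoulliRBM as a stand-in for the CD-trained RBMs and the ICR class from the previous sketch; the hyperparameters and the number of independent components per layer are illustrative assumptions, not the paper's settings.

```python
# Schematic sketch of the training flow: greedy CD pretraining of stacked
# RBMs (bottom-up) followed by layer-by-layer ICR fine-tuning (top-down).
from sklearn.neural_network import BernoulliRBM

def train_dbn_icr(X, Y_onehot, layer_sizes=(200, 100, 65), n_components=20):
    # 1) Unsupervised pretraining: each RBM is trained (contrastive
    #    divergence inside BernoulliRBM) on the activations of the layer
    #    below; inputs are assumed to be scaled to [0, 1].
    rbms, activations = [], [X]
    h = X
    for n_hidden in layer_sizes:
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                           n_iter=10, random_state=0)
        h = rbm.fit_transform(h)
        rbms.append(rbm)
        activations.append(h)

    # 2) Supervised ICR fine-tuning from the top down: first regress the
    #    classifier layer on the last hidden layer, then each hidden layer
    #    on the one below it, replacing gradient-based fine-tuning.
    icr_layers, target = [], Y_onehot
    for h_below in reversed(activations[1:]):
        d = min(n_components, h_below.shape[1])
        icr_layers.insert(0, ICR().fit(h_below, target, d))
        target = h_below
    return rbms, icr_layers
```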

The architecture of the DBN-ICR algorithm is shown in Figure 2.

2.3. Convergence Analysis

For the proposed DBN-ICR, the weight parameters are crucial to its convergence. Thus, a theoretical proof of the convergence of DBN-ICR is described in this section. According to the learning process of DBN-ICR, the whole dynamic transmission error can be described as in [28]:

$$e(k) = f^{*}\left( x_{k}, W^{*} \right) - \hat{f}\left( x_{k}, \hat{W} \right)$$

where $x_{k}$ is the $k$th training sample, and $f^{*}$ and $\hat{f}$ are the ideal deep belief network architecture function and the obtained deep belief network architecture function of DBN-ICR for the training samples, respectively; $\hat{W}$ is the final weight derived from DBN-ICR, and $W^{*}$ is the ideal weight of DBN-ICR for the training samples.

According to [29], it is assumed that

$$\left\| W^{*} - \hat{W} \right\| \le \delta$$

where $\|\cdot\|$ is the Euclidean distance and $\delta$ is a positive constant. Because $W^{*}$ is bounded, according to the architecture of deep belief networks, $\hat{W}$ is also bounded. Therefore, this assumption is achievable.

Theorem 1. Considering the stable error dynamics and the assumption above, if DBN-ICR is used to recognize the different leak apertures, then the classification error can converge to a finite vector: $\|e(k)\| \le \varepsilon$ ($\varepsilon$ is a small positive number). Moreover, with the increase of samples and time, $e(k)$ is uniformly and ultimately bounded, and it approaches 0 when the training samples are adequate.

Proof. Consider the Lyapunov function

$$V(k) = \frac{1}{2} e^{T}(k) e(k)$$

The difference of $V(k)$ between consecutive training steps is described as

$$\Delta V(k) = V(k+1) - V(k) = \frac{1}{2}\left[ e^{T}(k+1) e(k+1) - e^{T}(k) e(k) \right]$$

To further analyze this difference, it is discussed according to the following two cases:
(1) If $\|e(k+1)\| \le \|e(k)\|$, then $\Delta V(k) \le 0$, and $\Delta V(k)$ is negative semidefinite.
(2) If $\|e(k+1)\| > \|e(k)\|$, then $\Delta V(k) > 0$, and $\Delta V(k)$ is positive definite. However, as time goes on or the number of samples increases, there must eventually be one of two situations: $\|e(k+1)\| < \|e(k)\|$ or $\|e(k+1)\| = \|e(k)\|$; both situations are the same as that of case (1).
Thus, $\Delta V(k)$ is negative semidefinite.
The training error sum is expressed by

$$\sum_{k=1}^{K} \Delta V(k) = V(K+1) - V(1) \le 0$$

According to the Lyapunov stability theorem, $e(k)$ converges to the bounded set $\|e(k)\| \le \varepsilon$ as $K \rightarrow \infty$, where $K \rightarrow \infty$ indicates that the number of the training samples is large enough and that all the training samples have been put into DBN-ICR.
Therefore, the convergence of DBN-ICR with respect to the weight parameters is guaranteed theoretically.

2.4. Leak Aperture Recognition Based on WPA and DBN-ICR

It has been shown that a deep belief network can achieve a lower error rate than traditional methods in fault pattern classification [30]. Hence, using the advantages of both WPA and DBN-ICR, we propose a hybrid leakage aperture identification method. The sound velocities of the ultrasonic signals can be denoised using WPA. The signals corresponding to different leakage apertures may be different; however, it is hard to differentiate the apertures through pattern recognition without feature extraction based on prior knowledge.

Therefore, DBN-ICR is applied to identify the different leakage apertures from the signals after WPA denoising. The schematic of the proposed method is shown in Figure 3.

3. Field Experiment and Analysis

The acquired sound velocities of the ultrasonic signals are first processed by WPA and then classified by DBN-ICR. In addition, a comparative study between the proposed method and existing pipeline leakage aperture recognition methods is performed.

3.1. Field Experiment

Because of safety and cost concerns, water instead of oil was used to simulate pipeline leaks in the experiment. According to the leakage experimental protocols for examining apertures of different sizes, as shown in Figure 4, the sound velocities of the upstream or downstream ultrasonic signals were chosen for the test. The length of the pipeline segment is 2,800 m with an inner diameter of 50 mm, and the leakage apertures are 4 mm, 6 mm, 10 mm, and 15 mm. The operating conditions were as follows: (a) water was transported at 12 m³/h; (b) valves with different apertures were installed to emulate leaks; (c) the leakage flow and energy were supplied by the upstream pump (the lift of the pump was 120 m); (d) leaked water was collected in a tank. The experimental apparatus is shown in Figure 5. The WPA and DBN-ICR algorithms were tested in the MATLAB environment. The database was acquired from the ultrasonic equipment through a National Instruments DAQ-9184 at a sampling rate of 100 Hz. All the methods were implemented in MATLAB R2014a on a PC with an Intel Pentium processor (2.90 GHz) and 6 GB RAM.

3.2. Sound Velocity of Ultrasonic Processing and Aperture Recognition

The sound velocities of the ultrasonic signals from different apertures were acquired by an ultrasonic sensor at the end of the pipeline; these velocities were used as the database to verify the proposed algorithm. The acquired signals are shown in Figure 6.

The measured sound velocities of the ultrasonic signals are decomposed by WPA using the db3 wavelet and a three-layer best-tree structure. Using the best-tree structure, the sound velocity of the ultrasonic signal is reconstructed. To validate the denoised signals, which are the reconstructed signals obtained via the wavelet packet, the results of denoising the signal with VMD, EMD, and LMD are also shown in Figure 7.

Figure 7 shows that wavelet packet analysis presents a powerful ability for denoising the measured sound velocities of the ultrasonic signal, compared with the other methods, such as EMD, LMD, and VMD. When the measured sound velocities of the ultrasonic signals are reconstructed with EMD, because of the influence of the end effect, it is very difficult to accurately remove the noise. The LMD method is more relaxed than the EMD method in terms of decomposition conditions, as the end effect is reduced and the over-envelope phenomenon is avoided in the reconstruction process. However, the end effect still affects the noise removal. VMD can adaptively extract the intrinsic modes of the original sound velocities of the ultrasonic signal, but the end effect also affects the reconstructed signal.

Therefore, WPA is used to denoise the sound velocities of the ultrasonic signals at the end of the pipeline; the sound velocities of the ultrasonic signals of the normal condition and of leakage apertures of 4 mm, 6 mm, 10 mm, and 15 mm are collected by the ultrasonic equipment. Five cases are created, and 1,000 samples from each case are chosen for the DBN-ICR and the DBN. Thus, 5,000 samples of sound velocities of the ultrasonic signals are selected. Moreover, 4,000 samples are chosen as training samples, and the others are used as the testing samples.

In this simulation, the denoised sound velocities of the ultrasonic signals are the inputs of the DBN-ICR and the DBN, and five features (absolute mean value, effective value, kurtosis, pulse factor, and peak factor) are input into the least squares twin support vector machine (LSTSVM) [31], the least squares support vector machine (LSSVM), the support vector machine (SVM), and the backpropagation neural network (BPNN). The feature vectors of 4,000 groups (800 for each condition) are chosen as training samples, and the others are used as testing samples. This paper adopts the one-against-all (OAA) algorithm for multiclassification.
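The five time-domain features fed to the shallow classifiers can be computed as sketched below; the exact definitions of the pulse and peak factors are standard conventions and are assumed here, since explicit formulas are not given in this paper.

```python
# Sketch of the five time-domain features used by the shallow classifiers.
import numpy as np
from scipy.stats import kurtosis

def time_domain_features(x):
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))        # absolute mean value
    rms = np.sqrt(np.mean(x ** 2))       # effective (RMS) value
    kurt = kurtosis(x, fisher=False)     # kurtosis
    peak = np.max(np.abs(x))
    pulse_factor = peak / abs_mean       # pulse factor (assumed definition)
    peak_factor = peak / rms             # peak (crest) factor (assumed definition)
    return np.array([abs_mean, rms, kurt, pulse_factor, peak_factor])
```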

We follow common experience in the architecture selection of the DBN: the number of neurons in each layer is smaller than that of the previous layer, so that the processing in the DBN can act as feature extraction. In this study, the three hidden layers of DBN-ICR are selected by simulation. Moreover, to determine the number of neurons in the hidden layers, a trial-and-error approach is chosen to examine the relation between the number of neurons in a hidden layer and the classification error. The relation curve between the number of neurons in the hidden layer and the classification error is shown in Figure 8. Figure 8 reveals that the best number of neurons in hidden layer 3 is 65, and the corresponding root mean square error (RMSE) is 0.01; the RMSE is given as

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_{i} - \hat{y}_{i} \right)^{2}}$$

where $y_{i}$ is the desired value, $\hat{y}_{i}$ is the output value of DBN-ICR, and $N$ is the number of testing samples.
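The trial-and-error search over the number of hidden neurons can be sketched as a simple sweep using the RMSE above; the candidate range and the train_and_predict callback are placeholders, not the authors' settings.

```python
# Sketch of the trial-and-error search over hidden-layer sizes.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def sweep_hidden_neurons(train_and_predict, X_train, Y_train, X_test, Y_test,
                         candidates=range(5, 105, 5)):
    """train_and_predict(n, ...) is assumed to train a network with n neurons
    in the layer under study and return its predictions on X_test."""
    errors = {n: rmse(Y_test, train_and_predict(n, X_train, Y_train, X_test))
              for n in candidates}
    best_n = min(errors, key=errors.get)
    return best_n, errors
```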

According to the above analysis, the architecture of the DBN-ICR and the DBN is selected as 400-200-100-65-5. To improve the accuracy of fine-tuning, the number of independent components must be set appropriately; the trial-and-error approach is used to determine the optimal number of independent components, and the relation curve between the number of independent components and the classification error is shown in Figure 9.

Figure 9 reveals the best numbers of independent components, for which the corresponding classifier error is 0.01. Thus, the DBN-ICR is constructed with the optimal numbers of independent components. After unsupervised learning and supervised learning, we use DBN-ICR to recognize the different leakage apertures. To efficiently demonstrate the proposed DBN-ICR method, the input and output for the DBN are the same as those for DBN-ICR; moreover, results using LSTSVM, LSSVM, SVM, and BPNN trained on the 4,000 samples are obtained for comparison in the simulation. Here, DBN refers to the traditional DBN with gradient-based fine-tuning.

The DBN’s weights are initialized randomly, the biases are initialized to zero, the maximum number of iterations is 1, the batch size is 1, and the learning rate is 0.4. In LSTSVM, we set the slack variable to 0.01 and the kernel parameter to 1. The OAA method is used to accomplish multiclassification. We employ LS-SVMlab to implement the multiclassifier of LSSVM, where the slack variable is 1 and the kernel parameter is 1. We also employ Libsvm to implement the multiclassifier of SVM, where the slack variable is 10 and the kernel parameter is 0.1. This paper chooses a three-layer BPNN in which the middle layer has 100 nodes, using 100 iterations, a learning rate of 0.5, and a preset minimum error. A comparison of the testing results with LSTSVM, LSSVM, SVM, and BPNN is shown in Table 1.
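As a rough illustration of the baseline setup, approximate scikit-learn counterparts of the stated configurations are sketched below; the original baselines were implemented with LS-SVMlab and Libsvm in MATLAB, and mapping the slack variable to C and the kernel parameter to the RBF gamma is our assumption.

```python
# Approximate scikit-learn counterparts of the MATLAB baselines (sketch).
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# One-against-all SVM with C = 10 and gamma = 0.1, as stated for Libsvm.
svm_oaa = OneVsRestClassifier(SVC(C=10.0, kernel="rbf", gamma=0.1))

# Three-layer BP network: 100 hidden nodes, 100 iterations, learning rate 0.5.
bpnn = MLPClassifier(hidden_layer_sizes=(100,), solver="sgd",
                     learning_rate_init=0.5, max_iter=100)

# Usage (features/labels extracted from the WPA-denoised signals):
# svm_oaa.fit(train_features, train_labels)
# accuracy = svm_oaa.score(test_features, test_labels)
```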

The experimental results are listed in Table 1. Table 1 reveals that DBN-ICR has the best results in terms of average training time. Note that, although the accuracy of BPNN is as good as that of DBN, the time-domain signals of the sound velocities of the ultrasonic signals were input into the DBN and DBN-ICR without feature selection based on experience. Table 1 also reveals that LSTSVM achieves better accuracy and a shorter average running time than LSSVM and SVM; the accuracy of LSSVM is the same as that of SVM, but the average running time of LSSVM is less than that of SVM. Compared with the DBN, DBN-ICR improves the recognition accuracy of different leakage apertures and the program running time, primarily because of the ICR-based fine-tuning.

3.3. Results and Analysis

The classification rate is calculated as the ratio of the number of correctly classified test samples to the total number of test samples. The proposed method and the LSTSVM-based, LSSVM-based, SVM-based, and BPNN-based methods are used to identify the different leakage apertures. 5,000 trials are performed, where 80% of samples are randomly selected for training and other samples are used for testing.

In the simulation with the proposed method, the average testing accuracy is 98.98%; i.e., all the different apertures are effectively recognized. Using the LSTSVM-based, LSSVM-based, SVM-based, and BPNN-based methods, however, the average testing accuracies are 98.58%, 98.42%, 98.1%, and 98.91%, respectively. This result implies that the proposed method obtains higher recognition accuracy and shows better robustness than the other methods in distinguishing the different apertures.

DBN can process the sound velocities of ultrasonic signals of a pipeline in the time domain to recognize the different leakage apertures directly, without feature extraction and feature selection by prior knowledge. Thus, the intelligence of leak detection and leak aperture recognition is enhanced.

In this paper, we have not studied leak location or the architecture selection of the DBN. At present, the architecture of deep learning networks, including the number of hidden layers and the number of neurons in each hidden layer, is selected through empirical or experimental methods. This approach requires much work, and the architecture selection may affect the accuracy or speed. Adaptive architecture selection is still a difficult problem for deep neural networks; we will study this problem in the future.

4. Conclusions

In this paper, a method for leak aperture recognition of a pipeline based on WPA and DBN-ICR was proposed. To effectively extract more valuable leakage information, WPA was applied to refine the measured sound velocity of the ultrasonic signal and construct the original data set. To achieve the desired performance of leak aperture recognition and remove the requirement for manual feature selection, DBN-ICR was used as the classifier. To investigate the effectiveness of the proposed method, it was tested on the sound velocity of ultrasonic data from an experimental pipeline to recognize different leak apertures. The results showed that the proposed method can reliably recognize different leakage apertures.

Data Availability

Because all the experimental data will be used in a patent application for the project, we cannot share the experimental data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61673199.