Abstract

To address the problems of missing edges and obvious artifacts in Electrical Capacitance Tomography (ECT) reconstruction algorithms, an image reconstruction method based on a multiscale dual-channel convolutional neural network is proposed. First, the image reconstructed by the Landweber algorithm is input into the convolutional neural network, and four scales are selected for feature extraction. Feature concatenation across the scales fuses the feature maps with the information of the output layer. To improve imaging accuracy, two frequency channels are designed for the input image. The middle layer of the network consists of two fully convolutional structures, and convolution layers and jump connections are designed separately for each channel, which greatly improves the network's ability to extract feature information and reduces the number of feature maps required in each layer. The network is relatively shallow, which speeds up training, prevents the network from falling into local optima, and ensures the effective transmission of image details. Simulation experiments are carried out for four typical two-phase media distributions. The edges of the reconstructed images are smoother and the image error is smaller; the method effectively resolves the missing edges and reduces the edge artifacts of reconstructed images in the ECT system.

1. Introduction

In the 1980s, researchers proposed electrical capacitance tomography (ECT) inspired by medical CT, and it has since been widely used in industry [1]. The principle of the method is to measure the capacitance changes of the sensing area with electrode pairs, to reconstruct the internal distribution with an image reconstruction algorithm, and to output the result visually as a two-dimensional or three-dimensional image. In ECT, the image reconstruction algorithm is a key problem that directly affects imaging accuracy [2]. At present, the classical image reconstruction methods include LBP [3], the Landweber iterative method [4, 5], the Tikhonov regularization algorithm, singular value decomposition (SVD), algebraic reconstruction technique (ART), the extreme learning machine (ELM) [6], and convolutional neural networks (CNN) [7]. LBP is a qualitative algorithm that is simple and fast but has limited imaging accuracy. The Landweber iterative algorithm and ART are iterative algorithms that improve iterative stability and control noise [8]; compared with the noniterative LBP and SVD algorithms, they improve imaging accuracy to some extent, but they suffer from missing edges and large artifacts when processing complex media distributions. The Tikhonov regularization algorithm loses some image information, so the quality of its reconstructed images needs further improvement; it has a fast imaging speed and a simple principle but low imaging accuracy. CNNs can extract image edge information well through their convolutional feature extraction ability, so this paper applies an improved CNN to ECT image reconstruction. In [9], an ECT image reconstruction algorithm based on improved particle swarm optimization combined with the Landweber algorithm was proposed to ensure strong global optimization ability in the initial stage; the new algorithm significantly improves the subjective and objective quality of the reconstructed image. In [10], an image reconstruction method based on a fast convergent convolutional neural network (FCCNN) was proposed, which improves the efficiency and quality of image reconstruction. In [11], a fusion method for ECT reconstructed images based on the wavelet transform was proposed. In [12], an ECT image reconstruction algorithm based on an SVM decision tree was proposed to predict the number of phases and reconstruct the image; the algorithm adapts to the number of phases of multiphase flow and improves reconstruction accuracy. In [13], an ECT image reconstruction method based on the particle filter was proposed; it is an effective and accurate method that provides a new approach and means for ECT image reconstruction technology.

To solve the problems of missing edges and obvious artifacts and to extract feature values more comprehensively, this paper proposes an ECT image reconstruction algorithm based on a multiscale dual-channel convolutional neural network. The network adopts four convolution kernels of different sizes, dual-channel frequency division processing, and jump connections between scales, which speed up network training, prevent the network from falling into local optima, and ensure the effective transmission of image detail information.

2. The Basic Principle of Electrical Capacitance Tomography

A typical 12-electrode ECT system consists of a sensor system, capacitance data acquisition, and image reconstruction system, as shown in Figure 1 [14]. The capacitance sensor is composed of three parts: insulating pipe, measuring electrode, and grounding shield. The data is collected by the data acquisition system and transferred to a computer for image reconstruction.

A typical capacitance sensor has 8 or 12 electrodes. The capacitance between each pair of plates can be expressed as

C_{ij} = \iint_{D} \varepsilon(x, y)\, S_{ij}(x, y)\, \mathrm{d}x\, \mathrm{d}y,

where C_{ij} is the capacitance between electrode pair i and j in the tested area, D is the cross section of the area to be measured, \varepsilon(x, y) is the relative dielectric constant distribution of the internal medium in the area to be measured, and S_{ij}(x, y) is the sensitivity distribution of the cross section to be measured. After normalization and linearization, the following mathematical model is obtained:

\lambda = S g,

where S is the sensitivity distribution matrix and g and \lambda represent the normalized dielectric constant distribution vector and the normalized capacitance vector, respectively. The process of retrieving the dielectric constant distribution from the known capacitance and sensitivity field data is the image reconstruction process.

3. ECT Image Reconstruction Based on Multiscale Dual-Channel Convolutional Neural Network

3.1. The Theory of Convolution Neural Network

A convolutional neural network learns the hidden mapping relationship in the data from a large number of training samples and can extract features automatically [15]. The network is trained with a supervised parameter-correction algorithm: by passing the data through the network many times, the error between the network output and the ground truth is calculated and gradually reduced by correcting the parameters and updating the model. The process consists of two main stages. The first stage is forward propagation: the data are input to the network, transformed layer by layer, and mapped through the hidden layers to the output layer. The second stage is backpropagation, which further optimizes the parameters using the labeled original data. The training process of the convolutional network is as follows [16].

3.1.1. Forward Propagation

After convolving the input feature maps of the previous layer with several different convolution kernels and combining the results, the feature maps of the convolution layer are obtained. The j-th feature map of convolution layer l is given by

x_{j}^{l} = f\Big( \sum_{i \in M_{j}} x_{i}^{l-1} * k_{ij}^{l} + b_{j}^{l} \Big),

where M_{j} is the set of input feature maps selected from the previous layer, b_{j}^{l} is the offset (bias) term of the j-th feature map in layer l, k_{ij}^{l} is the weight (convolution kernel) connecting the i-th input feature map to the j-th feature map in layer l, f(·) is the activation function, and * denotes convolution.

According to the principle of regional correlation, downsampling reduces the amount of data while retaining the effective information. Each n × n block of a feature map of the upper layer corresponds to one feature value of the downsampling layer. The j-th feature map of downsampling layer l is given by

x_{j}^{l} = f\big( \beta_{j}^{l}\, \mathrm{down}(x_{j}^{l-1}) + b_{j}^{l} \big),

where down(·) is the downsampling function, which sums all pixels in each n × n block of the input image so that the output image shrinks by a factor of n in both dimensions, \beta_{j}^{l} is the multiplicative bias of feature map j, and b_{j}^{l} is the additive bias of feature map j.
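As a concrete illustration (a minimal NumPy sketch, not the authors' code), the following fragment implements the two forward-propagation operations just described: a valid 2-D convolution with bias followed by an activation f(·), and an n × n sum-pooling downsampling with multiplicative bias β and additive bias b.

```python
import numpy as np

def conv2d_valid(x, k, b):
    """Valid 2-D convolution (cross-correlation, as in most CNN frameworks) plus bias."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return np.maximum(out, 0.0)          # ReLU-type activation f(.)

def downsample(x, n, beta, b):
    """Sum-pool n x n blocks, then apply multiplicative/additive biases and f(.)."""
    h, w = x.shape[0] // n, x.shape[1] // n
    pooled = x[:h * n, :w * n].reshape(h, n, w, n).sum(axis=(1, 3))
    return np.maximum(beta * pooled + b, 0.0)
```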

3.1.2. Backpropagation

The weights are updated during backpropagation. The weights of the convolution layer are updated by gradient descent:

w^{l} \leftarrow w^{l} - \eta\, \frac{\partial E}{\partial w^{l}}.

The weights of the neurons in the downsampling layer are updated in the same way:

\beta^{l} \leftarrow \beta^{l} - \eta\, \frac{\partial E}{\partial \beta^{l}},

where \eta is the learning rate and E is the cost function of the training process, which can be described as follows:

E = \frac{1}{2N} \sum_{n=1}^{N} \sum_{i=1}^{c} \big( t_{i}^{n} - y_{i}^{n} \big)^{2},

where c is the number of image pixels, N is the number of samples per batch, t_{i}^{n} is the label of pixel i of training sample n, and y_{i}^{n} is the actual output for pixel i of training sample n.
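To make the update rule concrete, here is a minimal NumPy sketch (illustrative only, not the paper's training code) that evaluates the batch cost E and applies plain gradient-descent updates to a toy single-layer linear model; the 900-dimensional input matches the pixel count used later in the paper, but the data are random placeholders.

```python
import numpy as np

def batch_cost(t, y):
    """E = 1/(2N) * sum over samples and pixels of (t - y)^2."""
    N = t.shape[0]
    return np.sum((t - y) ** 2) / (2.0 * N)

# Toy example: a single linear "layer" y = X @ w, updated by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 900))         # N = 8 samples, 900 input values
t = rng.normal(size=(8, 1))           # labels
w = np.zeros((900, 1))
eta = 0.001                           # learning rate

for _ in range(200):
    y = X @ w
    grad = X.T @ (y - t) / X.shape[0]  # dE/dw for the cost above
    w -= eta * grad                    # w <- w - eta * dE/dw

print(batch_cost(t, X @ w))            # cost after the updates
```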

For two inputs with the same number of channels that need to be convolved, the channels can share the same convolution kernel after calling the concat function. Since the convolution kernel of each output channel is independent, considering a single channel k, the output of the concat function is

y_{k} = \sum_{m=1}^{2} \lambda_{m} \big( x_{k}^{(m)} * w_{k} \big),

where y_{k} is the output of the k-th channel feature map, x_{k}^{(1)} and x_{k}^{(2)} are the k-th channel feature maps of the two inputs, w_{k} is the shared convolution kernel, and \lambda_{m} are the combining coefficients.

3.2. Convolution Neural Network Model Based on a Multiscale Dual-Channel Convolution Kernel

Based on the deep study of the principle of the convolutional neural network, this paper designs the structure of the multiscale dual-channel convolutional neural network [17], as shown in Figure 2.

3.2.1. Dual-Channel Frequency Division

To extract the frequency characteristic corresponding to each image pixel value, this paper builds a mathematical model that maps pixel values to a spatial frequency characteristic and introduces a frequency measurement matrix. A receptive-field matrix is used to measure the image frequency: the Gaussian-weighted standard deviation of the pixel values within the receptive field is taken as the frequency characteristic of its center point. Compared with a convolution kernel whose weights are all 1, computing the standard deviation with Gaussian-kernel weighting emphasizes the influence of the center point. Positions within the boundary range that have no pixel value are filled so that the standard deviation of the receptive field can be computed, the whole image is mapped to a matrix, and the frequency characteristic of every pixel is then obtained from this frequency measurement matrix. For convenient classification, the measurement matrix of the image is normalized to the range 0–1, and the values are divided into three regions, 0–0.3, 0.3–0.7, and 0.7–1.0, corresponding to low-, medium-, and high-frequency areas, respectively; the low-frequency area forms one channel, and the medium- and high-frequency areas form the other channel. A marker matrix is then calculated and used to split the original image into images of the corresponding frequencies, which roughly correspond to the background area, the main area, and the detail area of the image. Dividing the image into four or more detail regions might yield more obvious effects, but this is not investigated further in this paper. The relationship between the normalized value and the pixel points is shown in Figure 3.

The two-dimensional Gaussian kernel function used here is

G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\Big( -\frac{(x - x_{0})^{2} + (y - y_{0})^{2}}{2\sigma^{2}} \Big),

where x and y are the coordinates of a spatial point, (x_{0}, y_{0}) is the center position, \sigma is the standard deviation, and G(x, y) is the weight value corresponding to the point (x, y).

The weights of the Gaussian convolution kernel are set according to this function.
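The specific kernel weights are not reproduced here. The following Python/NumPy sketch (an illustrative assumption, not the authors' implementation) builds a normalized Gaussian kernel from the function above, computes the frequency measurement matrix as the Gaussian-weighted local standard deviation over a k × k receptive field, normalizes it to 0–1, and splits the image into the low-frequency and medium-/high-frequency channels with the 0.3 threshold described above; the receptive-field size k and the standard deviation sigma are assumed values.

```python
import numpy as np

def gaussian_kernel(k=5, sigma=1.0):
    """k x k Gaussian weights, normalized to sum to 1."""
    ax = np.arange(k) - (k - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def frequency_matrix(img, k=5, sigma=1.0):
    """Gaussian-weighted local standard deviation, normalized to [0, 1]."""
    w = gaussian_kernel(k, sigma)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")       # fill positions outside the boundary
    freq = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + k, j:j + k]
            mean = np.sum(w * patch)             # Gaussian-weighted mean
            freq[i, j] = np.sqrt(np.sum(w * (patch - mean) ** 2))
    rng = freq.max() - freq.min()
    return (freq - freq.min()) / rng if rng > 0 else freq

def split_channels(img, freq, low_thr=0.3):
    """Marker matrix: low-frequency channel vs. medium-/high-frequency channel."""
    low_mask = freq < low_thr
    return img * low_mask, img * (~low_mask)
```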

3.2.2. Jump Connection

Because the medium- and high-frequency channels carry a large amount of image reconstruction information, a five-layer jump-connected convolutional network is used for them, while the low-frequency channel carries less reconstruction information and a four-layer jump-connected convolutional network is used, as shown in Figure 2. The middle layer of the network consists of two fully convolutional structures. The convolution structure of the medium-/high-frequency channel includes five convolution layers and four jump connections; the convolution structure of the low-frequency channel includes four convolution layers and three jump connections. Convolution kernels of five different scales are used in the five convolution layers of the medium-/high-frequency channel. The small kernels capture the detail areas of the image and have few parameters, which reduces the computational complexity. One jump connection combines the output of the first layer with the output of the second layer, and the result is used as the input of the third layer; another jump connection combines the outputs of the first four layers as the input of the fifth layer, and all feature maps are normalized. In the same way, convolution kernels of four scales are used in the four convolution layers of the low-frequency channel. Finally, the results of the two channels are combined to obtain the image reconstruction result.

This connection method improves the network's ability to extract feature information and reduces the number of feature maps required in each layer. Since the number of network layers is relatively shallow, network training is accelerated, the network is prevented from falling into local optima, the effective transmission of image details is ensured, and the final image reconstruction result is obtained.

The multiscale convolution operation is as follows:

F_{s_i} = W_{s_i} * X,

where W_{s_i} represents the group of convolution kernels of scale s_i in the layer, F_{s_i} represents the group of feature maps output by the layer, and X represents the input image.

The detailed parameters of the core part of the network are shown in Tables 1 and 2: Table 1 gives the parameters of the medium-/high-frequency channel, and Table 2 gives the parameters of the low-frequency channel. Conv size represents the size of the convolution kernel, padding represents the size of the filled boundary, stride represents the step length of one kernel movement, and bias represents the bias term.
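For illustration only, the following PyTorch sketch outlines a dual-channel structure of this kind. The kernel sizes (9, 7, 5, 3, 1 for the medium-/high-frequency channel and 7, 5, 3, 1 for the low-frequency channel), the width of 16 feature maps per layer, and the final 1 × 1 fusion convolution are assumptions rather than the values in Tables 1 and 2; the sketch only shows the jump-connection pattern in which the third layer receives the concatenation of the first two outputs and the last layer receives the concatenation of all earlier outputs. In the paper the two channel outputs are combined and mapped to the 900 pixels by a fully connected layer; here a 1 × 1 convolution stands in for that step.

```python
import torch
import torch.nn as nn

class JumpChannel(nn.Module):
    """One frequency channel: stacked conv layers with jump (skip) connections.

    Kernel sizes and width are illustrative assumptions, not the paper's values.
    """
    def __init__(self, kernels=(9, 7, 5, 3, 1), width=16):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = 1
        for i, k in enumerate(kernels):
            self.convs.append(
                nn.Sequential(nn.Conv2d(in_ch, width, k, padding=k // 2),
                              nn.PReLU()))
            # Layer 3 sees concat(out1, out2); the last layer sees the concat of
            # all previous outputs; the other layers see the previous output only.
            if i + 1 == 2:
                in_ch = 2 * width
            elif i + 1 == len(kernels) - 1:
                in_ch = (len(kernels) - 1) * width
            else:
                in_ch = width
        self.out = nn.Conv2d(width, 1, 1)

    def forward(self, x):
        outs, h = [], x
        for i, conv in enumerate(self.convs):
            if i == 2:
                h = torch.cat(outs[:2], dim=1)          # jump: out1 + out2
            elif i == len(self.convs) - 1:
                h = torch.cat(outs, dim=1)              # jump: all earlier outputs
            h = conv(h)
            outs.append(h)
        return self.out(h)

class DualChannelNet(nn.Module):
    """Medium-/high-frequency channel (5 layers) + low-frequency channel (4 layers)."""
    def __init__(self):
        super().__init__()
        self.mid_high = JumpChannel(kernels=(9, 7, 5, 3, 1))
        self.low = JumpChannel(kernels=(7, 5, 3, 1))
        self.fuse = nn.Conv2d(2, 1, 1)                  # combine the two channels

    def forward(self, x_mid_high, x_low):
        return self.fuse(torch.cat([self.mid_high(x_mid_high),
                                    self.low(x_low)], dim=1))
```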

3.2.3. Multiscale Reconstruction

Using multiscale convolution to reconstruct the network can not only extract the features related to the distribution of media but also improve the reconstruction effect between channels [18, 19]. To illustrate the effectiveness of multiscale convolution kernel features and determine the size of convolution kernel, 10 groups of experiments were carried out using core flow, and MSE (mean square error) after 1000 iterations was used as the measurement index to evaluate the network performance, as shown in Table 3.

MSE is defined as follows:

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \big( O_{i} - P_{i} \big)^{2},

where O_{i} represents the gray value of pixel i in the image with the preset media distribution, P_{i} represents the gray value of pixel i in the reconstructed image, and n is the number of pixels in the image. The optimizer and the backpropagation algorithm are used to adjust the network parameters so as to minimize the MSE; the weight update process is

\Delta w_{s}^{(k+1)} = -\eta\, \frac{\partial L}{\partial w_{s}^{(k)}}, \qquad w_{s}^{(k+1)} = w_{s}^{(k)} + \Delta w_{s}^{(k+1)},

where k is the number of iterations of the network, L is the loss function, \eta is the learning rate, s is the layer index, \Delta w_{s}^{(k)} and \Delta w_{s}^{(k+1)} are the weight update values at the k-th and (k+1)-th iterations of layer s, and w_{s}^{(k)} and w_{s}^{(k+1)} are the weights at the k-th and (k+1)-th iterations of layer s.

In Experiment 1 of Table 3, the multiscale kernel convolution unit is replaced by a standard single-scale convolution unit; in Experiments 2–6, the two branches of the multiscale kernel convolution unit use the same convolution kernel; in Experiments 7–10, the convolution kernels of the two branches are different. The data of Experiments 1–6 show which single-scale kernel gives the smallest MSE value; among Experiments 2–10, the MSE of Experiment 10 is the smallest and its image reconstruction effect is the best.

Based on the above analysis, the four optimal scales are selected, and the MSE obtained with multiscale kernel convolution is smaller than that obtained with standard convolution.

Compared with a multilayer single-scale neural network, extracting image information with an image reconstruction network that has a multiscale convolution structure can not only automatically extract the local features of the feature maps but also learn the multiscale information of the two-dimensional feature maps, and its automatic, implicit extraction also simplifies the complex computation of image reconstruction. At the same time, building the multiscale module increases the interaction of information between layers, which improves the nonlinear ability of the network, speeds up training, and yields better image reconstruction results [11].

The multiscale convolution network designed in this paper has the following advantages:

(1) Increased feature diversity: multiscale convolution feature fusion is an improvement of the convolution network that addresses the problems of a network structure that is not wide enough, a large number of model parameters, and a single feature extraction scale.

(2) Improved feature extraction ability: multilayer convolution improves the nonlinear ability, automatically extracts higher-dimensional features, and improves the feature extraction and fitting ability of the model. For features extracted by a small-scale convolution kernel, adding larger-scale kernels enriches the information, improves the image reconstruction effect, and improves the accuracy of the model description [20].

(3) Cross-layer information flow: each convolution layer in the network has a different receptive field, so the image reconstruction network can extract representation vectors at multiple scales. A small convolution kernel has a small receptive field and pays more attention to details, which benefits the reconstruction of image details; a large convolution kernel expands the receptive field and pays more attention to macroscopic structure and lines, which benefits the reconstruction of the overall image information. Convolution kernels of different scales complement each other's multiscale information and effectively improve the information transfer between layers [21].

4. Algorithm Implementation Steps

After determining the new neural network model structure of dual-channel frequency division, jump connection, and multiscale reconstruction, data preprocessing and algorithm implementation are needed before the next simulation experiment.

4.1. Preprocessing

This algorithm first uses the Landweber algorithm to reconstruct the image and then converts the result into a picture that serves as the input of the network. The preprocessing of the 10000 groups of capacitance data into pictures is as follows (a code sketch of step (2) is given after this list):

(1) The 10000 groups of capacitance values and the distribution matrices of the four typical medium distributions (core flow, laminar flow, circulation, and multidrop flow) are extracted and normalized.

(2) According to optimization theory, the objective of ECT image reconstruction is obtained from the normalized model \lambda = S g:

\min_{g} f(g) = \frac{1}{2} \| S g - \lambda \|^{2}.

According to the definition of the vector norm, the objective function can be written as

f(g) = \frac{1}{2} (S g - \lambda)^{T} (S g - \lambda).

The gradient of f(g) is

\nabla f(g) = S^{T} (S g - \lambda).

According to the principle of the steepest descent method, taking the negative gradient direction as the search direction of the optimization, the iterative formula of ECT image reconstruction is

g_{k+1} = g_{k} + \alpha_{k} S^{T} (\lambda - S g_{k}),

where \alpha_{k} is the step size. The selection of this parameter is very important: if the value is too large, the iteration is no longer consistent with the original problem; if it is too small, it cannot play a regularizing role, so in practice it is often chosen by experience. The iteration step size also plays an important role in the iterative process: if it is too small, convergence is slow; if it is too large, the iteration may not converge. To reduce the number of iterations while ensuring the imaging quality of the iterative method, the least squares criterion is used to constrain the iteration, and the step size is chosen to minimize the sum of squared errors. The iteration numbers for core flow, circulation, laminar flow, and multidrop flow are 80, 20, 80, and 250, respectively. Compared with the traditional Landweber algorithm, this choice of values gives higher imaging accuracy and faster convergence. The choice of iteration number and step size for different models is detailed in [22] and is not repeated here due to space limitations.

(3) The iterative results are normalized and used as the initial input of the network model.
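The following Python/NumPy fragment is a minimal sketch of this preprocessing step under the stated assumptions: it implements the steepest-descent (Landweber-type) iteration g_{k+1} = g_{k} + \alpha_{k} S^{T}(\lambda - S g_{k}), with the step size \alpha_{k} chosen at every iteration to minimize the sum of squared errors along the search direction. The matrix shapes (66 independent capacitance measurements of a 12-electrode sensor and 900 pixels) follow the paper; the data themselves are random placeholders.

```python
import numpy as np

def landweber_reconstruct(S, lam, n_iter=80):
    """Steepest-descent Landweber iteration with least-squares optimal step size.

    S   : (m, n) normalized sensitivity matrix
    lam : (m,)  normalized capacitance vector
    """
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        r = lam - S @ g                  # residual
        d = S.T @ r                      # negative gradient (search direction)
        Sd = S @ d
        denom = Sd @ Sd
        if denom == 0:
            break
        alpha = (d @ d) / denom          # step minimizing ||lam - S(g + alpha*d)||^2
        g = g + alpha * d
    return np.clip(g, 0.0, 1.0)          # keep the normalized permittivity in [0, 1]

# Placeholder example: 66 electrode pairs of a 12-electrode sensor, 900 pixels.
rng = np.random.default_rng(0)
S = rng.random((66, 900))
g_true = (rng.random(900) > 0.7).astype(float)
lam = S @ g_true
g0 = landweber_reconstruct(S, lam, n_iter=80)   # initial network input (900 pixels)
```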

4.2. Algorithm Implementation

The traditional convolutional neural network adopts one input layer, two convolution layers, two subsampling layers, one fully connected layer, and one output layer, and image reconstruction is realized through a single convolution path [23]. In this paper, the multiscale dual-channel idea is introduced to improve the network [20, 24]. In preprocessing, the initial input image is obtained with the Landweber algorithm; every input image is then divided by frequency and fed into the multiscale dual-channel convolutional neural network for image reconstruction. The detailed steps are as follows (a sketch of the resulting training loop is given after the list):

(1) The Landweber algorithm is used to reconstruct the image that serves as the input of the network.

(2) The input image has 900 pixels, and each image is divided into three frequency division results: high, medium, and low.

(3) The images of the different channels are convolved by the corresponding network structures, the outputs of the two channels are combined by calling the concat function, and a matrix mapping to the 900 pixels of the pipeline is obtained by fully connected training, which completes the forward propagation of the convolutional neural network and yields the forward output of the network.

(4) The forward output of the network is compared with the original medium distribution, the error value and the gradient of each layer of the network are calculated, the network parameters are corrected, and the iteration is repeated until the training error condition is met.

(5) The trained network model is used for image reconstruction.
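Purely as an illustration of steps (3) and (4) (not the authors' code), the following PyTorch sketch trains a dual-channel model with the Adadelta optimizer and MSE loss. It assumes that the hypothetical DualChannelNet class sketched in Section 3.2.2 is in scope, that the Landweber results have already been split into the two frequency channels, and that the 900 pixels are arranged as a 30 × 30 grid; the tensors below are random placeholders standing in for the 8000-sample training set.

```python
import torch
import torch.nn as nn

# Placeholder tensors: Landweber results already split into the two frequency
# channels (see Section 3.2.1), reshaped to 30 x 30 = 900 pixels.
N = 256                                   # the real training set has 8000 samples
x_low  = torch.rand(N, 1, 30, 30)         # low-frequency channel inputs
x_mh   = torch.rand(N, 1, 30, 30)         # medium-/high-frequency channel inputs
target = torch.rand(N, 1, 30, 30)         # preset medium distributions (labels)

model = DualChannelNet()                  # hypothetical sketch from Section 3.2.2
optimizer = torch.optim.Adadelta(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
batch = 32                                # batch size chosen in Section 5.2.1

for epoch in range(10):
    perm = torch.randperm(N)
    for i in range(0, N, batch):
        idx = perm[i:i + batch]
        optimizer.zero_grad()
        out = model(x_mh[idx], x_low[idx])        # forward propagation
        loss = loss_fn(out, target[idx])          # error against medium distribution
        loss.backward()                           # gradients of every layer
        optimizer.step()                          # correct the network parameters
```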

5. Simulation Experiment

5.1. Dataset

To verify the effectiveness of the proposed algorithm, a 12-electrode ECT system is selected as the research object. The pipeline is divided into 900 pixels, and the dielectric constants of the two phases are set to 1 and 3, respectively. For each of the four typical media distributions, core flow, laminar flow, circulation, and multidrop flow, 2500 groups of capacitance values are generated, 10000 groups in total. 8000 groups of samples are randomly selected as the training set, and the remaining 2000 groups form the test set. Each sample is preprocessed by the Landweber algorithm before being input into the network. Some data examples from the MATLAB simulation experiments are shown in Figure 4. The Landweber algorithm is used to process the generated medium distributions; the results are shown in Figure 5, where Figures 5(a)–5(d) show core flow, laminar flow, circulation, and multidrop flow, respectively.

5.2. Training Strategy
5.2.1. Training Batch

The number of training samples in deep learning is large; updating the parameters with single-sample iterations easily falls into a local optimum and limits the generalization ability of the model. If the training samples are divided into several batches for updating the parameters, the model can converge quickly even when the sample size is large, so the number of samples per batch is very important for the average processing rate of the model. If the batch size is small, the training time of the model increases and it is easy to fall into a local optimum; if the batch size is large, the optimal direction of the model parameters cannot be guaranteed. In this paper, the batch size is tested, and the average processing rate is introduced as

P = \frac{N_{\mathrm{success}}}{N_{\mathrm{total}}} \times 100\%,

where N_{\mathrm{success}} is the number of samples that are reconstructed successfully and N_{\mathrm{total}} is the total number of samples.

Since the convergence error of the network during training is 0.22, 0.22 is set as the threshold that decides whether an image is reconstructed successfully. The experimental results are shown in Table 4. It can be seen that the best average processing rate and average training time of the training set are achieved when the batch size is 32.
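A minimal sketch of this success-rate computation (under the assumption, stated above, that a sample counts as successfully reconstructed when its error is below 0.22):

```python
import numpy as np

def average_processing_rate(errors, threshold=0.22):
    """Percentage of samples whose reconstruction error is below the threshold."""
    errors = np.asarray(errors)
    return 100.0 * np.mean(errors < threshold)

print(average_processing_rate([0.15, 0.30, 0.21, 0.10]))   # 75.0
```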

5.2.2. Optimizer and Learning Rate Test

For different deep learning models and imaging tasks, choosing the right optimizer and the best learning rate is very important for improving the training speed and processing rate of the model. Considering the network model and image reconstruction task constructed in this paper, as well as the influence of different learning rates on the convergence rate, the SGD, Adam, and Adadelta optimizers [25] were each used to carry out 10 experiments, with the average processing rates of the training set and test set as the evaluation index (unit: %). The experimental results are shown in Table 5, in which the data marked in bold are the best imaging result of each optimizer.

It can be seen that the experimental results of the Adadelta and Adam optimizers are similar, and both achieve the best imaging results at a learning rate of 0.001, but as the learning rate increases, the imaging quality declines seriously. The SGD optimizer behaves in the opposite way: when the learning rate is low, the imaging result is poor, and the best effect is achieved at a learning rate of 1. The Adadelta optimizer balances the highest processing rates of the training set and the test set, so the Adadelta optimizer is selected and the learning rate is set to 0.001.

5.3. Other Parameter Settings
5.3.1. Activation Function

In this paper, PReLU (Parametric Rectified Linear Unit) is selected as the activation function of the network structure. Except for the last layer, the end of every convolution layer and deconvolution layer is connected to this function. The formula can be described as follows:

f(y_{i}) = \begin{cases} y_{i}, & y_{i} > 0 \\ a_{i} y_{i}, & y_{i} \le 0 \end{cases}

where y_{i} is the input signal of layer i in the positive interval and a_{i} is the weight coefficient of the negative interval of layer i.

The final output of the convolution layer is given by

x^{l} = f\big( w^{l} * x^{l-1} + b^{l} \big),

where b^{l} is the offset (bias) of layer l.
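For illustration, a one-line NumPy version of PReLU (a stands for the learnable negative-slope coefficient a_i):

```python
import numpy as np

def prelu(y, a):
    """PReLU: y for y > 0, a * y otherwise."""
    return np.where(y > 0, y, a * y)

print(prelu(np.array([-2.0, 0.5, 3.0]), a=0.25))   # [-0.5  0.5  3. ]
```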

5.3.2. Network Training Parameters

The network learning rate, weight attenuation rate, display period, and other network training parameter settings are shown in Table 6.

5.4. Experimental Results and Analysis

To evaluate the quality of the reconstructed image, the relative image error and the correlation coefficient are used as evaluation parameters. The correlation coefficient describes the relationship between the real dielectric constant distribution and the reconstructed image. The smaller the relative error and the larger the correlation coefficient of the reconstructed image, the better the quality of the reconstructed image. The two indices, which are used to compare the proposed method with LBP and Landweber, are calculated as

\mathrm{IE} = \frac{\| \hat{e} - e \|}{\| e \|},

\mathrm{CC} = \frac{\sum_{i=1}^{n} (\hat{e}_{i} - \bar{\hat{e}})(e_{i} - \bar{e})}{\sqrt{\sum_{i=1}^{n} (\hat{e}_{i} - \bar{\hat{e}})^{2} \sum_{i=1}^{n} (e_{i} - \bar{e})^{2}}},

where e is the dielectric constant distribution of the original image, that is, the gray values of the original image, with average value \bar{e}; \hat{e} is the gray-value distribution of the image reconstructed by the algorithm, with average value \bar{\hat{e}}. With these formulas the correlation between the reconstructed image and the original image can be calculated.
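As an illustration (assuming the standard ECT definitions reconstructed above), the two indices can be computed as follows:

```python
import numpy as np

def image_error(e_true, e_rec):
    """Relative image error ||e_rec - e_true|| / ||e_true||."""
    return np.linalg.norm(e_rec - e_true) / np.linalg.norm(e_true)

def correlation_coefficient(e_true, e_rec):
    """Pearson correlation between reconstructed and original distributions."""
    a = e_rec - e_rec.mean()
    b = e_true - e_true.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))
```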

Based on the four typical models, the imaging area is divided into 900 pixels using a circular grid, and the LBP algorithm, Landweber algorithm, CNN, and the algorithm in this paper are used for simulation, and the results are compared, as shown in Table 7.

The relative image errors and correlation coefficients obtained from the experimental results are compared with previous research results, as shown in Table 8. It can be seen from Table 8 that, for the four selected models, compared with the traditional LBP algorithm, the Landweber algorithm, and the image reconstruction algorithms improved by other researchers, the algorithm in this paper displays the size, shape, edge, and other information of the defects more clearly, reduces the relative image error, and improves the image correlation coefficient. Compared with the traditional CNN image reconstruction algorithm, the image correlation coefficient and relative error of the algorithm optimized in this paper are also greatly improved.

Figure 6 compares the relationship between the number of iterations and the MSE for the algorithm in this paper and for CNN. Before 1600 iterations, the MSE of the proposed algorithm drops rapidly; after 1600 iterations, the convergence of the test data levels off, the network performance reaches its best, and the errors of the test set and training set reach their minimum values of 0.3 and 0.22, respectively. CNN requires 1800 iterations: after 1800 iterations the convergence of the test data levels off, and the minimum errors of the test set and training set are 0.44 and 0.39, respectively.


5.5. Arc Artifacts

During image reconstruction, the ability of the algorithm to extract effective features is limited, and the edge of the image reconstruction result is quite different from the original image, so arc artifacts will appear. Taking LBP algorithm as an example, the reconstructed image and the generated arc artifacts are shown in Figure 7.

The capacitance data used as the prior for image reconstruction are affected by the dielectric distribution information. In the low-frequency part of the frequency domain the medium distribution is more uniform and the variation is small; in the medium-/high-frequency part the variation is larger. The frequency of the artifact region is relatively low, so it is assigned to the low-frequency channel. Therefore, the multiscale dual-channel network reconstruction in this paper can effectively eliminate artifact areas with small variation, as shown in Table 9 [27, 28].

6. Conclusion

Because of the different characteristics of ECT image reconstruction algorithms, there is still a lack of high-precision image reconstruction algorithms that adapt well to the media distribution. To solve the problems of missing edges and edge artifacts in ECT image reconstruction, this paper proposes an ECT image reconstruction method based on a multiscale dual-channel convolutional neural network, which increases the information interaction between layers. To obtain more features of the media distribution, more statistical and structural attributes are captured by extracting frequency division features, and the multiscale processing also suppresses noise, which improves the robustness and accuracy of the reconstruction and effectively solves the problems of missing edges and image edge artifacts. It is an effective image reconstruction algorithm and provides a reference for subsequent research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding this work.

Acknowledgments

This work was sponsored by National Natural Science Foundation of China (61402126, 60572153, and 60972127), Nature Science Foundation of Heilongjiang province of China (F2016024), Heilongjiang Postdoctoral Science Foundation (LBH-Z15095), and University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (UNPYSCT-2017094). This work was supported by China Natural Fund.