ECT Image Reconstruction Algorithm Based on Multiscale Dual-Channel Convolutional Neural Network

Lili Wang, Xiao Liu, Deyun Chen, Hailu Yang, Chengdong Wang

Complexity, vol. 2020, Article ID 4918058, 12 pages, 2020. https://doi.org/10.1155/2020/4918058

Research Article | Open Access
Academic Editor: Atila Bueno
Received: 10 Jun 2020; Revised: 02 Aug 2020; Accepted: 03 Sep 2020; Published: 14 Sep 2020

Abstract

To address the missing edges and pronounced artifacts produced by existing Electrical Capacitance Tomography (ECT) reconstruction algorithms, an image reconstruction method based on a multiscale dual-channel convolutional neural network is proposed. First, the image reconstructed by the Landweber algorithm is fed into the convolutional neural network, and features are extracted at four scales. Feature concatenation across the scales fuses the feature maps with the information of the output layer. To improve imaging accuracy, the input image is split into two frequency channels. The middle of the network consists of two fully convolutional structures; convolutional layers and jump connections are designed separately for each channel, which greatly improves the network's ability to extract feature information and reduces the number of feature maps required in each layer. The network is relatively shallow, which speeds up training, helps prevent the network from falling into local optima, and ensures the effective transmission of image detail. Simulation experiments are carried out for four typical two-phase media distributions. The edges of the reconstructed images are smoother and the image error is smaller; the method effectively resolves the missing edges and reduces the edge artifacts in the ECT system.

1. Introduction

In the 1980s, researchers proposed electrical capacitance tomography (ECT), inspired by medical CT, and it has since been widely used in industry [1]. The principle of the method is to measure the capacitance changes of the sensing region with electrodes, reconstruct the permittivity distribution with an image reconstruction algorithm, and visualize the result as a two- or three-dimensional image. In ECT, the image reconstruction algorithm is a key problem that directly affects imaging accuracy [2]. At present, classical reconstruction methods include LBP [3], the Landweber iterative method [4, 5], the Tikhonov regularization algorithm, singular value decomposition (SVD), the algebraic reconstruction technique (ART), the extreme learning machine (ELM) [6], and convolutional neural networks (CNN) [7]. LBP is a qualitative algorithm that is simple and fast but of limited imaging accuracy. The Landweber algorithm and ART are iterative algorithms that improve iterative stability and suppress noise [8]; compared with the noniterative LBP and SVD algorithms they offer some improvement in imaging accuracy, but they still suffer from missing edges and large artifacts when the media distribution is complex. The Tikhonov regularization algorithm loses some image information, so the quality of its reconstructed images needs further improvement: it is fast and simple in principle but of low imaging accuracy. A CNN can extract image edge information better through its convolutional feature extraction ability, so this paper applies an improved CNN to ECT image reconstruction. In [9], an ECT image reconstruction algorithm combining improved particle swarm optimization with the Landweber algorithm was proposed to guarantee strong global optimization ability in the early stage, and it significantly improves the subjective and objective quality of the reconstructed image. In [10], an image reconstruction method based on a fast convergent convolutional neural network (FCCNN) was proposed, improving the efficiency and quality of image reconstruction. In [11], a fusion method for ECT reconstructed images based on the wavelet transform was proposed. In [12], an ECT image reconstruction algorithm based on an SVM decision tree was proposed to predict the number of phases and reconstruct the image; the algorithm adapts to the phase number of multiphase flow and improves reconstruction accuracy. In [13], an ECT image reconstruction method based on the particle filter was proposed; it is an effective and accurate method that provides a new route for ECT image reconstruction technology.

To solve the problems of missing edges and obvious artifacts and to extract feature values more comprehensively, this paper proposes an ECT image reconstruction algorithm based on a multiscale dual-channel convolutional neural network. The network adopts four convolution kernels of different sizes, processes the input in two frequency channels, and uses jump connections between scales, which speeds up training, prevents the network from falling into local optima, and ensures the effective transmission of image detail.

2. The Basic Principle of Electrical Capacitance Tomography

A typical 12-electrode ECT system consists of a sensor system, a capacitance data acquisition system, and an image reconstruction system, as shown in Figure 1 [14]. The capacitance sensor is composed of three parts: an insulating pipe, the measuring electrodes, and a grounded shield. The data are collected by the data acquisition system and transferred to a computer for image reconstruction.

A typical capacitance sensor has 8 or 12 electrodes. The capacitance between each electrode pair can be expressed as

C_{ij} = \iint_{D} \varepsilon(x, y)\, S_{ij}(x, y)\, dx\, dy,

where C_ij is the capacitance between electrodes i and j, D is the cross section of the area to be measured, ε(x, y) is the relative permittivity distribution of the medium inside the measured area, and S_ij(x, y) is the sensitivity distribution over the cross section. After normalization and linearization, the following mathematical model is obtained:

\lambda = S g,

where S is the sensitivity distribution matrix and g and λ are the normalized permittivity distribution vector and the normalized capacitance vector, respectively. Image reconstruction is the process of recovering the permittivity distribution from the known capacitance and sensitivity field data.
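To make the forward model concrete, the sketch below evaluates the normalized relation λ = Sg and the simple back-projection estimate ĝ = Sᵀλ used by LBP. The dimensions follow the text (a 12-electrode sensor, which gives 66 independent electrode pairs, and 900 pixels), while the random sensitivity matrix and medium distribution are purely illustrative assumptions.

```python
import numpy as np

# Dimensions from the paper: a 12-electrode sensor gives 12*11/2 = 66
# independent capacitance measurements; the pipe cross section is
# discretized into 900 pixels.
n_meas, n_pix = 66, 900
rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix S (in practice computed from the
# sensor's electric field, not random).
S = rng.random((n_meas, n_pix))

# A synthetic normalized permittivity distribution g (two-phase: 0 or 1).
g_true = (rng.random(n_pix) > 0.8).astype(float)

# Forward model: normalized capacitance vector lambda = S g.
lam = S @ g_true

# LBP-style back projection: g_hat = S^T lambda, rescaled to [0, 1].
g_hat = S.T @ lam
g_hat = (g_hat - g_hat.min()) / (g_hat.max() - g_hat.min())
print(g_hat.shape)  # (900,)
```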

3. ECT Image Reconstruction Based on Multiscale Dual-Channel Convolutional Neural Network

3.1. The Theory of Convolution Neural Network

A convolutional neural network can discover the hidden mapping relationships in data from a large number of training samples and can extract features automatically [15]. Training is a supervised parameter-correction procedure: by passing the data through the network many times, the error between the network output and the ground truth is computed, and the error is finally reduced by correcting the parameters and modifying the model. The process consists of two main stages. The first stage is forward propagation, in which the input data are transformed layer by layer and mapped through the hidden layers to the output layer. The second stage is backpropagation, in which the labeled original data are used to further optimize the parameters. The training process of a convolutional network is as follows [16].

3.1.1. Forward Propagation

The feature maps of a convolution layer are obtained by convolving the feature maps of the previous layer with several different convolution kernels and combining the results. The j-th feature map of convolution layer l is given by

x_j^l = f\Big( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l \Big),

where M_j is the set of feature maps of the previous layer selected as input, b_j^l is the bias term of the j-th feature map in layer l, k_ij^l is the convolution kernel (weight) connecting the i-th input feature map to the j-th feature map in layer l, and * denotes convolution.

According to the principle of regional correlation, downsampling reduces the amount of data while retaining the effective information. Each n × n block of a feature map in the upper layer corresponds to one value in the downsampling layer. The j-th feature map of downsampling layer l is given by

x_j^l = f\big( \beta_j^l\, \mathrm{down}(x_j^{l-1}) + b_j^l \big),

where down(·) is the downsampling function that sums all pixels in each n × n block of the input feature map, so the output feature map shrinks by a factor of n in both dimensions, β_j^l is the multiplicative bias of feature map j, and b_j^l is the additive bias of feature map j.
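As a minimal illustration of the two formulas above, the following numpy/scipy sketch implements one convolution layer and one downsampling layer; the kernel values, biases, activation (tanh), and feature-map sizes are placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(feature_maps, kernels, bias, act=np.tanh):
    """One convolution layer: sum the convolutions of all input feature
    maps with their kernels, add a bias, and apply the activation f."""
    out = sum(convolve2d(fm, k, mode="valid") for fm, k in zip(feature_maps, kernels))
    return act(out + bias)

def downsample_layer(feature_map, n, beta, bias, act=np.tanh):
    """One downsampling layer: sum each n x n block, scale by the
    multiplicative bias beta, add the additive bias, apply f."""
    h, w = feature_map.shape
    blocks = feature_map[: h - h % n, : w - w % n].reshape(h // n, n, w // n, n)
    pooled = blocks.sum(axis=(1, 3))
    return act(beta * pooled + bias)

# Toy usage with random data (sizes are illustrative only).
rng = np.random.default_rng(0)
x = [rng.standard_normal((30, 30)) for _ in range(2)]   # two input feature maps
k = [rng.standard_normal((3, 3)) for _ in range(2)]     # one kernel per input map
y = conv_layer(x, k, bias=0.1)                          # 28 x 28 feature map
z = downsample_layer(y, n=2, beta=0.5, bias=0.0)        # 14 x 14 feature map
print(y.shape, z.shape)
```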

3.1.2. Backpropagation

The weights are updated during the learning process of backpropagation. The weight update of a convolution-layer neuron is given by

\Delta W_c = -\eta \frac{\partial E}{\partial W_c},

and the weights of the neurons in the downsampling layer are updated by

\Delta W_s = -\eta \frac{\partial E}{\partial W_s},

where η is the learning rate and E is the cost function of the training process, which can be written as

E = \frac{1}{2N} \sum_{n=1}^{N} \sum_{k=1}^{K} \big( t_k^{\,n} - y_k^{\,n} \big)^2,

where K is the number of image pixels, N is the number of samples per batch, t_k^n is the label of pixel k of training sample n, and y_k^n is the actual output of the network for pixel k of training sample n.
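The sketch below makes the cost function E and the update ΔW = −η ∂E/∂W concrete for a single linear output layer; the data, layer shape, and learning rate are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 8, 900                      # batch size and number of pixels
X = rng.standard_normal((N, 64))   # hypothetical input features
T = rng.random((N, K))             # pixel labels t_k^n
W = np.zeros((64, K))              # weights of one linear output layer
eta = 0.01                         # learning rate

for _ in range(100):
    Y = X @ W                                   # network output y_k^n
    E = ((T - Y) ** 2).sum() / (2 * N)          # cost function E
    grad = -(X.T @ (T - Y)) / N                 # dE/dW
    W -= eta * grad                             # Delta W = -eta * dE/dW
print(round(E, 4))
```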

For two inputs with the same number of channels that must be convolved, the channels can share the same convolution kernel after the concat function is called. Since the convolution kernel of each output channel is independent, considering a single channel c the output of the function is

Y_c = \alpha \big(K * X_c^{(1)}\big) + (1 - \alpha)\big(K * X_c^{(2)}\big),

where Y_c is the output of the channel-c feature map, X_c^{(1)} and X_c^{(2)} are the channel-c feature maps of the two inputs, K is the shared convolution kernel, and α is the combining coefficient.

3.2. Convolution Neural Network Model Based on a Multiscale Dual-Channel Convolution Kernel

Based on a detailed study of the principles of convolutional neural networks, this paper designs a multiscale dual-channel convolutional neural network [17], whose structure is shown in Figure 2.

3.2.1. Dual-Channel Frequency Division

In order to relate the value of each image pixel to a frequency characteristic, this paper proposes a mathematical model that maps pixel values to frequency characteristics in space and introduces a frequency measurement matrix. A receptive-field matrix is used to measure the local image frequency: the Gaussian-weighted standard deviation of the pixel values inside the receptive field is taken as the frequency characteristic of its center point. Compared with a convolution kernel whose weights are all 1, weighting the pixel values in the receptive field with a Gaussian kernel before computing the standard deviation better emphasizes the influence of the center point. Positions near the boundary that have no pixel value are padded so that the standard deviation of the receptive field can be computed, the whole image is mapped to a matrix, and the frequency characteristic of every pixel is then obtained from this frequency measurement matrix. For convenience of classification, the measurement matrix is normalized to the range 0–1 and divided into three intervals, 0–0.3, 0.3–0.7, and 0.7–1.0, corresponding to low-, medium-, and high-frequency regions, respectively; the low-frequency region forms one channel, and the medium- and high-frequency regions form the other channel. A marker matrix is then computed and used to split the original image into images of the corresponding frequencies, which correspond roughly to the background region, the main region, and the detail region of the image. Splitting the image into four or more regions might produce a more pronounced effect, but this is not investigated further in this paper. The relationship between the normalized value and the pixel position is shown in Figure 3.

The two-dimensional Gaussian kernel function used here is

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\Big( -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \Big),

where x and y are the coordinates of a spatial point, (x_0, y_0) is the center position, σ is the standard deviation, and G(x, y) is the weight assigned to that point.

The weights of the Gaussian convolution kernel are set by sampling this function over the receptive-field window.
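A minimal sketch of the frequency-division step described above, assuming a Gaussian-weighted local standard deviation computed with scipy's gaussian_filter, reflective boundary padding, and the 0.3 threshold from the text; the σ value and the padding scheme are assumptions, since the paper does not specify them here, and `image` stands for any 30 × 30 normalized input.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_channels(image, sigma=1.0, t_low=0.3):
    """Split an image into a low-frequency channel and a mid/high-frequency
    channel using the Gaussian-weighted local standard deviation."""
    # Gaussian-weighted local mean and mean of squares (boundaries padded
    # by reflection; the paper's exact padding scheme is not specified).
    mean = gaussian_filter(image, sigma, mode="reflect")
    mean_sq = gaussian_filter(image ** 2, sigma, mode="reflect")
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    # Normalize the frequency measurement matrix to [0, 1].
    f = (local_std - local_std.min()) / (local_std.max() - local_std.min() + 1e-12)

    # Marker matrix: 0-0.3 -> low-frequency channel, 0.3-1.0 -> mid/high channel.
    low_mask = f < t_low
    low_channel = np.where(low_mask, image, 0.0)
    midhigh_channel = np.where(~low_mask, image, 0.0)
    return low_channel, midhigh_channel

rng = np.random.default_rng(0)
low, midhigh = frequency_channels(rng.random((30, 30)))
print(low.shape, midhigh.shape)
```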

3.2.2. Jump Connection

Because the medium- and high-frequency channel carries a large amount of image reconstruction information, a five-layer convolutional network with jump connections is used for it, while the low-frequency channel carries less information and a four-layer jump-connected convolutional network is used, as shown in Figure 2. The middle of the network consists of two fully convolutional structures. The structure of the medium-/high-frequency channel contains five convolution layers and four jump connections, and the structure of the low-frequency channel contains four convolution layers and three jump connections. Convolution kernels of five different sizes are used in the five convolution layers of the medium-/high-frequency channel; the small kernels can capture the detail regions of the image and have few parameters, which reduces the computational complexity. A further convolution kernel is used to combine the output of the first layer with the output of the second layer, and the result is used as the input of the third layer; another combines the outputs of the first four layers as the input of the fifth layer, and all feature maps are normalized. In the same way, four convolution kernels of different sizes are used in the four convolution layers of the low-frequency channel. Finally, the results of the two channels are merged to obtain the image reconstruction result.

This connection method improves the ability of the network to extract feature information and reduces the number of feature maps required in each layer; because the number of network layers is relatively small, training is faster, the network is less likely to fall into a local optimum, image details are transmitted effectively, and the final image reconstruction result is obtained.

The multiscale convolution operation is

F_{si} = K_{si} * X,

where K_si is the group of convolution kernels of scale s in layer i, F_si is the group of feature maps output by that layer, and X is the input image.

The detailed parameters of the core part of the network are given in Tables 1 and 2: Table 1 lists the parameters of the medium-/high-frequency channel and Table 2 those of the low-frequency channel. Conv size denotes the size of the convolution kernel, padding the size of the boundary padding, stride the step of one kernel move, and bias whether a bias term is used. A minimal code sketch of this connection pattern is given after the tables.


Table 1: Parameters of the medium-/high-frequency channel.

Convolution layer | Input size–output size | Conv size | Padding | Stride | Bias
Conv1 | 1–5  | — | 0 | 1 | True
Conv2 | 5–7  | — | 1 | 1 | True
Conv3 | 12–7 | — | 2 | 1 | True
Conv4 | 14–7 | — | 3 | 1 | True
Conv5 | 26–1 | — | 1 | 1 | True


Table 2: Parameters of the low-frequency channel.

Convolution layer | Input size–output size | Conv size | Padding | Stride | Bias
Conv1 | 1–3 | — | 0 | 1 | True
Conv2 | 3–3 | — | 1 | 1 | True
Conv3 | 6–3 | — | 2 | 1 | True
Conv4 | 9–1 | — | 1 | 1 | True
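To show how the channel counts in Tables 1 and 2 and the jump connections described above fit together, the following PyTorch sketch builds the two branches and the final fully connected mapping to 900 pixels. It is a simplified reading, not the authors' implementation: the kernel sizes are assumed only so that the listed paddings keep a 30 × 30 input size, the jump connections are realized as plain channel concatenation before the next convolution, and PReLU is used as the activation as in Section 5.3.1.

```python
import torch
import torch.nn as nn

class MidHighBranch(nn.Module):
    """Five convolution layers with jump (concatenation) connections.
    Channel counts follow Table 1; the kernel sizes are assumptions chosen
    so that the listed paddings keep the 30 x 30 spatial size."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(1, 5, kernel_size=1, padding=0, bias=True)
        self.c2 = nn.Conv2d(5, 7, kernel_size=3, padding=1, bias=True)
        self.c3 = nn.Conv2d(12, 7, kernel_size=5, padding=2, bias=True)
        self.c4 = nn.Conv2d(14, 7, kernel_size=7, padding=3, bias=True)
        self.c5 = nn.Conv2d(26, 1, kernel_size=3, padding=1, bias=True)
        self.act = nn.PReLU()

    def forward(self, x):
        f1 = self.act(self.c1(x))
        f2 = self.act(self.c2(f1))
        f3 = self.act(self.c3(torch.cat([f1, f2], dim=1)))   # jump: layers 1+2
        f4 = self.act(self.c4(torch.cat([f2, f3], dim=1)))   # jump: layers 2+3
        return self.c5(torch.cat([f1, f2, f3, f4], dim=1))   # jump: layers 1-4

class LowBranch(nn.Module):
    """Four convolution layers with jump connections (Table 2)."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(1, 3, kernel_size=1, padding=0, bias=True)
        self.c2 = nn.Conv2d(3, 3, kernel_size=3, padding=1, bias=True)
        self.c3 = nn.Conv2d(6, 3, kernel_size=5, padding=2, bias=True)
        self.c4 = nn.Conv2d(9, 1, kernel_size=3, padding=1, bias=True)
        self.act = nn.PReLU()

    def forward(self, x):
        f1 = self.act(self.c1(x))
        f2 = self.act(self.c2(f1))
        f3 = self.act(self.c3(torch.cat([f1, f2], dim=1)))
        return self.c4(torch.cat([f1, f2, f3], dim=1))

class DualChannelNet(nn.Module):
    """Dual-channel network: each frequency channel goes through its branch,
    the results are concatenated, and a fully connected layer maps the merged
    features to the 900 pipeline pixels."""
    def __init__(self):
        super().__init__()
        self.mid_high = MidHighBranch()
        self.low = LowBranch()
        self.fc = nn.Linear(2 * 30 * 30, 900)

    def forward(self, x_low, x_midhigh):
        a = self.low(x_low)
        b = self.mid_high(x_midhigh)
        merged = torch.cat([a, b], dim=1).flatten(1)
        return self.fc(merged)

net = DualChannelNet()
out = net(torch.rand(2, 1, 30, 30), torch.rand(2, 1, 30, 30))
print(out.shape)  # torch.Size([2, 900])
```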

3.2.3. Multiscale Reconstruction

Using multiscale convolution in the reconstruction network not only extracts features related to the media distribution but also improves the reconstruction effect between channels [18, 19]. To illustrate the effectiveness of multiscale convolution kernels and to determine the kernel sizes, 10 groups of experiments were carried out on core flow, with the MSE (mean square error) after 1000 iterations as the index of network performance, as shown in Table 3.


Table 3: MSE comparison of convolution kernel configurations.

Experiment number | Type | Convolution kernel size | Average MSE value
1  | Standard convolution | —          | 0.4462
2  | Single scale         | —          | 0.4462
3  |                      | —          | 0.4698
4  |                      | —          | 0.5523
5  |                      | —          | 0.6113
6  |                      | —          | 0.6273
7  | Multiscale           | —, —, —    | 0.4522
8  |                      | —, —, —    | 0.5241
9  |                      | —, —, —    | 0.5946
10 |                      | —, —, —, — | 0.2408

The MSE is defined as

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \big( O_i - P_i \big)^2,

where O_i is the gray value of pixel i in the image with the preset media distribution, P_i is the gray value of pixel i in the reconstructed image, and n is the number of pixels in the image. The optimizer and the backpropagation algorithm adjust the network parameters to minimize the MSE; the weight update process is

\Delta W_s^{\,k+1} = -\eta \frac{\partial L}{\partial W_s^{\,k}}, \qquad W_s^{\,k+1} = W_s^{\,k} + \Delta W_s^{\,k+1},

where k is the iteration number of the network, L is the loss function, η is the learning rate, s is the layer index, ΔW_s^{k+1} is the weight update at iteration k+1 of layer s, and W_s^{k} and W_s^{k+1} are the weights of layer s at iterations k and k+1.

In experiment 1 of Table 3, the multiscale kernel convolution unit is replaced by a standard convolution unit; in experiments 2–6, the two branches of the multiscale kernel convolution unit use the same convolution kernel; in experiments 7–10, the convolution kernels of the branches differ. The data of experiments 1–6 show that the single-scale configuration of experiment 2 gives the smallest MSE among them; among experiments 2–10, the MSE of experiment 10 is the smallest and its image reconstruction effect is the best.

Based on the above analysis, the four optimal scales of experiment 10 are selected, and the MSE obtained with the selected multiscale kernel convolution is smaller than that obtained with standard convolution.

Compared with a multilayer single-scale neural network, an image reconstruction network with a multiscale convolution structure not only extracts the local features of the image feature maps automatically but also learns the multiscale information of the two-dimensional feature maps, and its implicit, automatic extraction simplifies the complex calculations of image reconstruction. At the same time, the multiscale module adds layers, and the information interaction between layers improves the nonlinear capacity of the network, speeds up training, and yields better image reconstruction results [11].

The multiscale convolution network designed in this paper has the following advantages:
(1) Increased feature diversity: multiscale convolution feature fusion improves on the plain convolution network and addresses the problems of an insufficiently wide structure, a large number of model parameters, and a single feature extraction scale.
(2) Improved feature extraction: multilayer convolution increases the nonlinear capacity, automatically extracts higher-dimensional features, and improves the feature extraction and fitting ability of the model. For features extracted by small-scale kernels, adding larger-scale kernels enriches the information, improves the image reconstruction effect, and improves the accuracy of the model description [20].
(3) Cross-layer information flow: each convolution layer has a different receptive field, so the reconstruction network can extract representations at multiple scales. Small kernels have small receptive fields and attend to details, which benefits the reconstruction of image detail; large kernels enlarge the receptive field and attend to macrostructure and lines, which benefits the reconstruction of the overall image. Kernels of different scales therefore complement each other and effectively improve the information transfer between layers [21].

4. Algorithm Implementation Steps

After determining the structure of the new neural network model with dual-channel frequency division, jump connections, and multiscale reconstruction, data preprocessing and algorithm implementation are needed before the simulation experiments.

4.1. Preprocessing

This algorithm first uses the Landweber algorithm to reconstruct an image, which is then converted into a picture and used as the input of the network. The preprocessing that converts the 10000 groups of capacitance data into pictures is as follows (a minimal code sketch of this iteration is given after the list):
(1) The capacitance values of the 10000 groups and the distribution matrices of the four typical media distributions (core flow, laminar flow, circulation, and multidrop flow) are extracted and normalized.
(2) According to optimization theory, the minimization target of ECT image reconstruction is obtained from equation (1):

\min_{g} \; \| S g - \lambda \|^2 .

According to the definition of the vector norm, the objective function can be written as

f(g) = \frac{1}{2} \| S g - \lambda \|^2 = \frac{1}{2} (S g - \lambda)^{T} (S g - \lambda).

The gradient of f(g) is

\nabla f(g) = S^{T} (S g - \lambda).

According to the principle of the steepest descent method, taking the negative gradient direction as the search direction, the iterative formula of ECT image reconstruction is

g_{k+1} = g_{k} - \alpha\, S^{T} (S g_{k} - \lambda),

where α is the step size. The selection of this parameter is very important: if the value is too large it is inconsistent with the original problem, and if it is too small it cannot play its regularizing role, so in practice it is chosen empirically. The step size also plays an important role in the iterative process: too small a value slows convergence, while too large a value leads to divergence. To reduce the number of iterations while keeping the imaging quality of the iterative method, a least-squares criterion is used to constrain the iteration, and the step size is chosen to minimize the sum of squared errors. The iteration numbers for core flow, circulation, laminar flow, and multidrop flow are 80, 20, 80, and 250, respectively. Compared with the traditional Landweber algorithm, this choice of values gives higher imaging accuracy and faster convergence. The choice of iteration number and step size for different models is detailed in [22] and, for reasons of space, is not repeated here.
(3) The iterative results are normalized and used as the initial input of the network model.
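A minimal sketch of this preprocessing iteration, assuming random placeholders for S and λ and using the least-squares (exact line search) step size described above; it is not the authors' exact implementation.

```python
import numpy as np

def landweber_preprocess(S, lam, n_iter=80):
    """Steepest-descent Landweber iteration g_{k+1} = g_k - a_k S^T (S g_k - lam),
    with the step size a_k chosen by an exact line search (least squares), i.e.
    the value that minimizes the squared residual along the gradient direction."""
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        r = S @ g - lam                        # residual
        d = S.T @ r                            # gradient of 0.5 * ||S g - lam||^2
        Sd = S @ d
        alpha = (d @ d) / (Sd @ Sd + 1e-12)    # least-squares step size
        g = g - alpha * d
    return np.clip(g, 0.0, 1.0)                # normalize before feeding the network

# Toy usage with random placeholders (66 measurements, 900 pixels).
rng = np.random.default_rng(0)
S = rng.random((66, 900))
g_true = (rng.random(900) > 0.8).astype(float)
lam = S @ g_true
g0 = landweber_preprocess(S, lam, n_iter=80)   # 80 iterations quoted for core flow
print(g0.shape)
```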

4.2. Algorithm Implementation

A traditional convolutional neural network uses one input layer, two convolution layers, two subsampling layers, one fully connected layer, and one output layer, and realizes image reconstruction by single-scale convolution [23]. In this paper, the multiscale dual-channel idea is introduced to improve the network [20, 24]. During preprocessing, the initial input image is obtained with the Landweber algorithm, and every input image is divided by frequency and fed into the multiscale dual-channel convolutional neural network for image reconstruction. The detailed steps are as follows (a training-loop sketch corresponding to steps (3)–(5) is given after the list):
(1) The Landweber algorithm is used to reconstruct an image that serves as the input of the network.
(2) Each 900-pixel input image is divided into high-, medium-, and low-frequency components.
(3) The images of the different channels are convolved by the corresponding network structures, the outputs of the two channels are combined by calling the concat function, and a fully connected layer produces a matrix mapping the 900 pixels of the pipeline; this completes the forward propagation of the convolutional neural network and yields the forward output of the network.
(4) The forward output of the network is compared with the original media distribution, the error and gradient of every layer are calculated, the network parameters are modified, and the iteration is repeated until the training error condition is met.
(5) The trained network model is used for image reconstruction.
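The following training-loop sketch corresponds to steps (3)–(5); it reuses the hypothetical DualChannelNet from the earlier sketch, and the tensors stand in for the Landweber-preprocessed frequency channels and the true distributions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the preprocessed inputs and labels:
# x_low / x_midhigh are the two frequency channels of the Landweber images,
# target is the true normalized permittivity distribution (900 pixels).
x_low = torch.rand(32, 1, 30, 30)
x_midhigh = torch.rand(32, 1, 30, 30)
target = torch.rand(32, 900)

net = DualChannelNet()                                   # defined in the earlier sketch
optimizer = torch.optim.Adadelta(net.parameters(), lr=0.001)
criterion = nn.MSELoss()

for epoch in range(1000):
    optimizer.zero_grad()
    output = net(x_low, x_midhigh)    # step (3): forward propagation
    loss = criterion(output, target)  # step (4): compare with the true distribution
    loss.backward()                   # gradients of every layer
    optimizer.step()                  # modify the network parameters
    if loss.item() < 0.22:            # training error condition from Section 5.2.1
        break

reconstruction = net(x_low, x_midhigh).detach().reshape(-1, 30, 30)  # step (5)
```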

5. Simulation Experiment

5.1. Dataset

To verify the effectiveness of the proposed algorithm, a 12-electrode ECT system is taken as the research object. The pipeline cross section is divided into 900 pixels, and the dielectric constants of the two-phase flow are set to 1 and 3, respectively. For each of the four typical media distributions (core flow, laminar flow, circulation, and multidrop flow), 2500 groups of capacitance values are generated, giving 10000 groups in total. 8000 groups are randomly selected as the training set, and the remaining 2000 groups form the test set. Each sample is preprocessed with the Landweber algorithm before being input into the network. The simulation is carried out in Matlab; some examples of the data are shown in Figure 4. The Landweber algorithm is applied to the generated media distributions, and the results are shown in Figure 5: Figures 5(a)–5(d) are core flow, laminar flow, circulation, and multidrop flow, respectively.

5.2. Training Strategy
5.2.1. Training Batch

Deep learning uses a large number of training samples; if the parameters are updated one sample at a time, the training easily falls into a local optimum and the generalization ability of the model is limited. If the training samples are divided into several batches for each parameter update, the model can converge quickly even when the sample size is large, so the number of samples per batch strongly affects the average processing rate of the model. If the batch is too small, the training time increases and the model easily falls into a local optimum; if the batch is too large, the direction of the parameter update cannot be guaranteed to be optimal. In this paper, the number of batch samples is tested, and the average processing rate is introduced:

P = \frac{N_s}{N_t} \times 100\%,

where N_s is the number of samples whose reconstruction error is below the success threshold and N_t is the total number of samples.

Since the convergence error of the network during training is 0.22, 0.22 is set as the threshold that decides whether an image is reconstructed successfully. The experimental results are shown in Table 4; a sketch of this computation is given after Table 4. It can be seen that a batch size of 32 achieves the highest average processing rate on both the training and test sets while keeping the training time acceptable.


Table 4: Effect of the batch size on the average processing rate and training time.

Number of batch samples | Average processing rate (%), train | Average processing rate (%), test | Average training time (s)
16  | 99.12 | 98.86 | 24.28
32  | 99.47 | 99.39 | 21.56
40  | 99.08 | 98.68 | 18.32
64  | 99.32 | 98.94 | 16.83
80  | 99.01 | 98.32 | 15.44
128 | 98.69 | 97.94 | 17.72
256 | 98.21 | 97.75 | 19.84
512 | 96.93 | 94.86 | 19.58
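Under the interpretation given above, the processing rate in Table 4 could be computed from per-sample reconstruction errors and the 0.22 success threshold as in this short sketch; the error values are random placeholders.

```python
import numpy as np

def average_processing_rate(errors, threshold=0.22):
    """Fraction of samples whose reconstruction error is below the
    success threshold, expressed as a percentage."""
    errors = np.asarray(errors)
    return 100.0 * (errors < threshold).mean()

# Placeholder per-sample MSE values for a set of reconstructions.
rng = np.random.default_rng(0)
per_sample_mse = rng.uniform(0.0, 0.4, size=2000)
print(round(average_processing_rate(per_sample_mse), 2))
```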

5.2.2. Optimizer and Learning Rate Test

For different deep learning models and imaging tasks, choosing a suitable optimizer and the best learning rate is important for the training speed and processing rate of the model. Considering the network model and the image reconstruction task in this paper, as well as the influence of the learning rate on the convergence rate, the SGD, Adam, and Adadelta optimizers [25] were each tested in 10 experiments, with the average processing rates of the training set and test set (in %) as the evaluation index; the results are shown in Table 5.


Table 5: Average processing rate (%) of the training and test sets for different optimizers and learning rates (train / test).

Optimizer | 0.0001 | 0.001 | 0.01 | 0.1 | 1
SGD      | 10.54 / 7.74  | 10.32 / 9.57  | 28.42 / 38.26 | 94.12 / 95.26 | 95.73 / 95.52
Adam     | 95.04 / 95.42 | 96.56 / 95.79 | 91.40 / 95.66 | 11.83 / 10.86 | 8.69 / 10.34
Adadelta | 95.21 / 95.57 | 96.77 / 96.00 | 96.44 / 95.17 | 13.58 / 9.94  | 9.80 / 10.71

It can be seen that the Adadelta and Adam optimizers behave similarly: both achieve their best imaging results at a learning rate of 0.001, but the imaging quality degrades severely as the learning rate increases. The SGD optimizer behaves in the opposite way: at low learning rates the imaging result is poor, and the best result is reached at a learning rate of 1. The Adadelta optimizer gives the highest recognition rate on both the training set and the test set, so the Adadelta optimizer is selected and the learning rate is set to 0.001.
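A short sketch of how such an optimizer and learning-rate sweep could be organized in PyTorch; the model is a placeholder, and only the final choice (Adadelta with a learning rate of 0.001) reflects the result reported above.

```python
import torch

def make_optimizer(name, params, lr):
    """Build one of the three optimizers compared in Table 5."""
    if name == "SGD":
        return torch.optim.SGD(params, lr=lr)
    if name == "Adam":
        return torch.optim.Adam(params, lr=lr)
    return torch.optim.Adadelta(params, lr=lr)

learning_rates = [0.0001, 0.001, 0.01, 0.1, 1.0]
model = torch.nn.Linear(900, 900)   # placeholder model
sweep = [(name, lr) for name in ("SGD", "Adam", "Adadelta") for lr in learning_rates]
# ... train and evaluate each (name, lr) pair in `sweep`, then keep the best:
optimizer = make_optimizer("Adadelta", model.parameters(), lr=0.001)
```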

5.3. Other Parameter Settings
5.3.1. Activation Function

In this paper, PReLU (Parametric Rectified Linear Unit) is selected as the activation function of the network structure. Except for the last layer, every convolution and deconvolution layer is followed by this function, which can be written as

f(y_i^l) = \begin{cases} y_i^l, & y_i^l > 0 \\ a_i^l\, y_i^l, & y_i^l \le 0, \end{cases}

where y_i^l is the input signal of layer l in the positive interval and a_i^l is the weight coefficient of the negative interval of layer l.

The final output of the convolution layer is given by

y_i^l = f\big( W_i^l * x^{l-1} + b_i^l \big),

where W_i^l is the convolution kernel, x^{l-1} is the input of the layer, and b_i^l is the offset of layer l.
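A brief illustration of the activation using PyTorch's built-in PReLU; the initial negative-slope coefficient shown is the library default, not a value taken from the paper.

```python
import torch
import torch.nn as nn

# PReLU: identity for positive inputs, learnable slope a for negative inputs.
act = nn.PReLU(num_parameters=1, init=0.25)

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(act(x))   # negative values are scaled by the learnable coefficient a
```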

5.3.2. Network Training Parameters

The network learning rate, weight attenuation rate, display period, and other training parameters are listed in Table 6.


Table 6: Network training parameters.

Name | Value
Learning rate | 0.001
Weight attenuation rate | 0.0001
Gradient clipping rate | 0.1
Number of iterations | 1000
Training batch size | 8
Validation batch size | 8
Number of worker threads | 4
Display cycle | 10
Model save cycle | 200

5.4. Experimental Results and Analysis

To evaluate the quality of the reconstructed image, the relative image error and the correlation coefficient are used as evaluation parameters. The correlation coefficient measures the relationship between the true permittivity distribution and the reconstructed image; the smaller the relative error and the larger the correlation coefficient, the better the quality of the reconstructed image. The two quantities are computed with equations (21) and (22) and compared against LBP and Landweber:

e_r = \frac{\| \hat{g} - g \|}{\| g \|},  (21)

r = \frac{\sum_{i} (\hat{g}_i - \bar{\hat{g}})(g_i - \bar{g})}{\sqrt{\sum_{i} (\hat{g}_i - \bar{\hat{g}})^2 \sum_{i} (g_i - \bar{g})^2}},  (22)

where g is the permittivity distribution of the original image, i.e., the gray values of the original image, with mean value \bar{g}, and \hat{g} is the gray-value image reconstructed by the algorithm, with mean value \bar{\hat{g}}. From these the correlation between the reconstructed image and the original image can be calculated.
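A compact sketch of the two evaluation metrics as written above; the original and reconstructed images are random placeholders.

```python
import numpy as np

def relative_error(g_true, g_rec):
    """Relative image error ||g_rec - g_true|| / ||g_true||."""
    return np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g_true, g_rec):
    """Correlation coefficient between reconstructed and original images."""
    a = g_rec - g_rec.mean()
    b = g_true - g_true.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

rng = np.random.default_rng(0)
g = rng.random(900)                          # original distribution (placeholder)
g_hat = g + 0.1 * rng.standard_normal(900)   # reconstruction (placeholder)
print(relative_error(g, g_hat), correlation_coefficient(g, g_hat))
```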

Based on the four typical models, the imaging area is divided into 900 pixels using a circular grid, and the LBP algorithm, the Landweber algorithm, CNN, and the algorithm proposed in this paper are used for simulation; the results are compared in Table 7.


Table 7: Reconstructed images of core flow, laminar flow, circle flow, and multidrop flow obtained with the LBP algorithm, the Landweber algorithm, CNN, and the proposed algorithm, compared with the original images.

The relative image error and correlation coefficient obtained from the experiments are compared with previous research results in Table 8. It can be seen that, for the four selected models, the proposed algorithm displays the size, shape, edges, and other information of the defects more clearly than the traditional LBP algorithm, the Landweber algorithm, and the improved reconstruction algorithms of other researchers; the relative image error is reduced and the image correlation coefficient is increased. Compared with the traditional CNN image reconstruction algorithm, the correlation coefficient and relative error of the optimized algorithm are also greatly improved.


Table 8: Relative image error and image correlation coefficient of different algorithms.

Relative image error:
Algorithm | Core flow | Laminar flow | Circle flow | Multidrop flow
ELM [6]       | 0.3008 | 0.5475 | 0.2475 | 0.2786
SVD [26]      | 0.2447 | 0.1626 | 0.3102 | 0.2665
ART [26]      | 0.2396 | 0.1719 | 0.2957 | 0.2340
Tikhonov [26] | 0.2045 | 0.1778 | 0.2118 | 0.2948
CNN [10]      | 0.2390 | 0.3540 | 0.2830 | 0.2968
Landweber     | 0.2734 | 0.3638 | 0.2967 | 0.3132
LBP           | 0.2542 | 0.3735 | 0.3045 | 0.5372
Our algorithm | 0.1976 | 0.0991 | 0.2105 | 0.1996

Image correlation coefficient:
Algorithm | Core flow | Laminar flow | Circle flow | Multidrop flow
ELM [6]       | 0.2031 | 0.8016 | 0.3360 | 0.3575
SVD [26]      | 0.9340 | 0.8800 | 0.7760 | 0.8840
ART [26]      | 0.9410 | 0.8580 | 0.8240 | 0.9090
Tikhonov [26] | 0.9400 | 0.8570 | 0.8360 | 0.7410
CNN [10]      | 0.8788 | 0.7924 | 0.7000 | 0.7847
Landweber     | 0.9404 | 0.8876 | 0.8433 | 0.9107
LBP           | 0.9306 | 0.8816 | 0.8193 | 0.6381
Our algorithm | 0.9628 | 0.9417 | 0.9564 | 0.9536

Figure 6 compares the relationship between the number of iterations and the MSE for the proposed algorithm and CNN. Before about 1600 iterations, the MSE of the proposed algorithm drops rapidly; after 1600 iterations the convergence of the test data levels off, the network performance reaches its best, and the errors of the test set and training set reach their minima of 0.3 and 0.22, respectively. CNN requires about 1800 iterations: after 1800 iterations the convergence of the test data levels off, and the minima of the test set and training set are 0.44 and 0.39, respectively.


5.5. Arc Artifacts

During image reconstruction, the ability of an algorithm to extract effective features is limited, so the edges of the reconstructed image may differ considerably from the original image and arc artifacts appear. Taking the LBP algorithm as an example, the reconstructed image and the resulting arc artifacts are shown in Figure 7.

The capacitance data used as prior information for image reconstruction are affected by the permittivity distribution. Where the media distribution is more uniform the variation is smaller and falls in the low-frequency part of the frequency domain; where the variation is larger it falls in the medium-/high-frequency part. The artifact regions vary slowly and are therefore assigned to the low-frequency channel, so the multiscale dual-channel reconstruction proposed in this paper can effectively eliminate the artifact regions with small variation, as shown in Table 9 [27, 28].


Table 9: Arc-artifact comparison of the reconstructions of core flow, laminar flow, circle flow, and multidrop flow obtained with the LBP algorithm, the Landweber algorithm, CNN, and the proposed algorithm.

6. Conclusion

Because existing ECT image reconstruction algorithms have different characteristics, high-precision algorithms that adapt well to different media distributions are still lacking. To solve the problems of missing edges and edge artifacts in ECT image reconstruction, this paper proposes a reconstruction method based on a multiscale dual-channel convolutional neural network, which increases the information interaction between layers. To obtain more features of the media distribution, more statistical and structural attributes are captured through the frequency-division features and the multiscale processing, which also suppresses noise, improves the robustness and accuracy of the reconstruction, and effectively alleviates the missing edges and edge artifacts. It is an effective image reconstruction algorithm and provides a reference for subsequent research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding this work.

Acknowledgments

This work was sponsored by the National Natural Science Foundation of China (61402126, 60572153, and 60972127), the Natural Science Foundation of Heilongjiang Province of China (F2016024), the Heilongjiang Postdoctoral Science Foundation (LBH-Z15095), and the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (UNPYSCT-2017094). This work was supported by the China Natural Fund.

References

  1. C. Y. Tsai and Y. C. Feng, “Real-time multi-scale parallel compressive tracking,” Journal of Real-Time Image Processing, vol. 16, no. 6, pp. 2073–2091, 2019. View at: Publisher Site | Google Scholar
  2. Y. L. Zhao, B. L. Guo, and Y. Y. Yan, “Latest development and analysis of electrical capacitance tomography technology,” Chinese Journal of Scientific Instrument, vol. 33, no. 8, pp. 1909–1920, 2012. View at: Google Scholar
  3. L. G. Bai and X. H. Duan, “Application of improved LBP algorithm in image retrieval,” Computer Engineering and Design, vol. 40, no. 6, pp. 1671–1675, 2019. View at: Google Scholar
  4. X. Chen, X. X. Gao, and T. C. Fang, “Comparison of algebraic reconstruction technology and simple iterative reconstruction technology in electrical capacity tomography image reconstruction,” Journal of Xi’an Jiaotong University, vol. 45, no. 4, pp. 25–29, 2011. View at: Google Scholar
  5. Y. Chen and D. Y. Chen, “Improved Runge-Kutta type Landweber image reconstruction algorithm for electrical capacitance tomography system,” Electric Machines and Control, vol. 18, no. 7, pp. 107–112, 2014. View at: Google Scholar
  6. L. F. Zhang and Y. F. Zhu, “Application of extreme learning machine in electrical capacitance tomography,” Electric Measurement and Instrument, vol. 11, no. 20, pp. 1–7, 2019. View at: Google Scholar
  7. A. Martínez Olmos, E. Castillo, F. Martínez-Martí, D. P. Morales, A. García, and J. Banqueri, “An imaging method for electrical capacitance tomography based on projections multiplication,” Journal of Physics: Conference Series, vol. 307, no. 1, Article ID 012032, 2011. View at: Publisher Site | Google Scholar
  8. J. Lei, J. H. Qiu, and S. Liu, “Dynamic reconstruction algorithm for electrical capacitance tomography based on the proper orthogonal decomposition,” Applied Mathematical Modelling, vol. 39, no. 22, pp. 6925–6940, 2015. View at: Publisher Site | Google Scholar
  9. C. M. Yan, G. Y. Lu, D. L. Zhang, and J. S. Dong, “An image reconstruction algorithm for electrical capacitance tomography images based on improved partical swarm optimization,” Computer Engineering and Science, vol. 41, no. 5, pp. 879–884, 2019. View at: Google Scholar
  10. J. Zheng, J. K. Li, Y. Li, and L. H. Peng, “A benchmark dataset and deep learning-based image reconstruction for electrical capacitance tomography,” Sensors, vol. 18, no. 11, Article ID 3701, 2018. View at: Publisher Site | Google Scholar
  11. L. Y. Li, Y. Kong, and D. Y. Chen, “A new ECT image reconstruction algorithm based on convolutional neural network,” Journal of Harbin University of Science and Technology, vol. 22, no. 4, pp. 28–33, 2017. View at: Google Scholar
  12. P. Wang, J. S. Lin, and M. Wang, “An image reconstruction algorithm for electrical capacitance tomography based on simulated annealing particle swarm optimization,” Journal of Applied Research and Technology, vol. 13, no. 2, pp. 197–204, 2015. View at: Publisher Site | Google Scholar
  13. X. J. Wu, G. X. Huang, and J. W. Wang, “Application of particle filtering algorithm to image reconstruction of ECT,” Optics and Precision Engineering, vol. 20, no. 8, pp. 1824–1830, 2012. View at: Publisher Site | Google Scholar
  14. Y. Yang and L. Peng, “Data pattern with ECT sensor and its impact on image reconstruction,” IEEE Sensors Journal, vol. 13, no. 5, pp. 1582–1593, 2013. View at: Publisher Site | Google Scholar
  15. X. Q. Huang, S. J. He, R. Y. Zhou, and Z. W. Huang, “An adaptive phase number image reconstruction algorithm based on DTSVM,” Measurement and Control Technology, vol. 38, no. 3, pp. 21–25, 2019. View at: Google Scholar
  16. L. F. Zhang, Y. J. Zhai, and X. G. Wang, “Application of Barzilai-Borwein gradient projection for sparse reconstruction algorithm to image reconstruction of electrical capacitance tomography,” Flow Measurement and Instrumentation, vol. 65, pp. 45–51, 2019. View at: Publisher Site | Google Scholar
  17. M. G. Finley and T. Bell, “Two-channel depth encoding for 3D range geometry compression,” Applied Optics, vol. 58, no. 25, Article ID 6882, 2019. View at: Publisher Site | Google Scholar
  18. T. Q. Peng and F. Li, “Image retrieval based on deep convolutional neural networks and binary hashing learning,” Journal of Electronics and Information Technology, vol. 38, no. 8, pp. 2068–2075, 2016. View at: Google Scholar
  19. A. Prabhu, J.-C. Gimel, A. Ayuela, S. Arrese-Igor, J. J. Gaitero, and J. S. Dolado, “A multi-scale approach for percolation transition and its application to cement setting,” Scientific Reports, vol. 8, no. 1, 2018. View at: Publisher Site | Google Scholar
  20. J. Zheng and L. Peng, “A deep learning compensated back projection for image reconstruction of electrical capacitance tomography,” IEEE Sensors Journal, vol. 20, no. 9, pp. 4879–4890, 2020. View at: Publisher Site | Google Scholar
  21. Y. Q. Kang, S. Liu, and J. Liu, “Image reconstruction algorithm for electrical capacitance tomography based on data correlation analysis,” Flow Measurement and Instrumentation, vol. 62, pp. 113–122, 2018. View at: Publisher Site | Google Scholar
  22. Y. H. Feng, X. R. Cao, and M. Y. He, “Study of the measurement of two-phase flow based on the modified Landweber algorithm,” Industrial Instrumentation and Automation, no. 2, pp. 68–71, 2011. View at: Google Scholar
  23. Y. R. Hu, “Electrical capacitance tomography image reconstruction based on GA-BP neural network,” Electronic Technology and Software Engineering, no. 23, pp. 83-84, 2018. View at: Google Scholar
  24. B. Zhou, J. Y. Zhang, C. L. Xu, and S. M. Wang, “Image reconstruction in electrostatic tomography using a priori knowledge from ECT,” Nuclear Engineering and Design, vol. 241, no. 6, pp. 1952–1958, 2011. View at: Publisher Site | Google Scholar
  25. H. C. Wang, I. Fedchenia, S. L. Shishkin, A. Finn, L. Smith, and M. Colket, “Image reconstruction for electrical capacitance tomography exploiting sparsity,” IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 1, pp. 89–102, 2014. View at: Google Scholar
  26. G. Q. Zhu, H. M. Yang, J. Li, and P. Yang, “Comparison of image reconstruction algorithms of electrical capacitance tomography,” Electronic Science and Technology, vol. 31, no. 1, pp. 1–6, 2019. View at: Google Scholar
  27. L. F. Zhang and Y. J. Song, “Image reconstruction for electrical capacitance tomography based on Barzilai-Borwein gradient projection for sparse reconstruction algorithm,” Acta Metrologica Sinica, vol. 40, no. 4, pp. 631–635, 2019. View at: Google Scholar
  28. L. Lu, G. W. Tong, G. Guo, and S. Liu, “Split Bregman iteration based reconstruction algorithm for electrical capacitance tomography,” Transactions of the Institute of Measurement and Control, vol. 41, no. 9, pp. 2389–2399, 2018. View at: Publisher Site | Google Scholar

Copyright © 2020 Lili Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views332
Downloads183
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.