Abstract

In the process of oilfield development, it is important to predict the oil and gas production. The predicted value of oil production is the amount of oil that may be obtained within a certain area over a certain period. Because of the current demand for oil and gas production prediction, a prediction model using a multi-input convolutional neural network based on AlexNet is proposed in this paper. The model predicts real oilfield data and achieves good results: increasing prediction accuracy by 17.5%, 20.8%, 11.6%, 8.9%, 6.9%, and 14.9% with respect to the backpropagation neural network, support vector machine, artificial neural network, radial basis function neural network, K-nearest neighbor, and decision tree methods, respectively. It addresses the uncertainty of oil and gas production caused by the change in parameter values during the process of petroleum exploitation and has far-reaching application significance.

1. Introduction

In recent years, owing to the persistent instability of international oil prices, shortage of oil and gas resources, high production costs, and other reasons, oilfields have higher requirements for the prediction of oil and gas production. At present, however, many methods for predicting oil and gas production still lack data integrity [1].

In the field of oil and gas production prediction, many scholars at home and abroad have adopted different methods: Zhou et al. used a backpropagation (BP) neural network to predict oil and gas production [2], Pi et al. used a radial basis function neural network to model a system and predict cumulative oil production [3], Huang et al. used an artificial neural network to successfully predict natural gas production [4], and Xing et al. predicted oilfield production through a combination of artificial neural networks [5, 6]. These methods have achieved good results. However, they are only applicable to vector data samples. To meet this condition, the inputs of the network models are the mean or variance of the original data, which reduces the effectiveness of the features. In addition, the logging data at each depth and the fracturing data in each time interval are not considered. Moreover, some of these methods only consider geological factors and ignore the characteristics of the other stages of the mining process, which leads to incomplete data items and destroys the integrity of the data. Furthermore, some current methods based on BP neural networks only improve the accuracy of the algorithm without considering the restoration of all the original data [7].

Taking all the raw data into consideration during production prediction is a difficult problem. To solve this problem while also improving accuracy, this paper proposes a prediction model based on a multi-input convolutional neural network derived from AlexNet. Some open multivariable systems exhibit complex nonlinearities and uncertainties, and most current production prediction methods are based on a single input, without considering the effects of logging and fracturing on the production level; as a result, traditional modeling methods carry uncertainty in both theoretical analysis and practical application. Because the factors that influence oil and gas production are numerous, and both logging and fracturing have a direct effect on production, this paper considers the influence of both logging and fracturing parameters: logging data from different depths and fracturing data over different time intervals clearly influence oil and gas production. To preserve the authenticity of the data, the logging and fracturing data of each well are treated as matrices instead of single values in the proposed model [8], and the neural network model is used to predict the development of coalbed methane. Previously, Duan et al. used a neural network to predict porosity [9], demonstrating that convolutional neural networks have good potential for processing geological reservoir parameters.

The remainder of this paper is organized as follows: data preprocessing, the K-means clustering algorithm, convolutional neural network theory, and the implementation of the AlexNet-based model are described in Sections 2–5, respectively. The experimental results are described in Section 6, and the paper is concluded in Section 7.

2. Data Preprocessing

2.1. Data Cleaning

The data used in this paper come from a certain region in Xinjiang and cover a total of 156 wells. The experimental data include logging data, fracturing data, and other manually acquired and collated data (for details, refer to Section 6.1). The raw data contain redundant, missing, and abnormal records; hence, we clean the data to improve their quality. The main method used in data cleaning is deletion: deletion of observation samples and deletion of variables. The deleted variables and samples have little effect on the research objectives. At the same time, a consistency check is carried out to ensure that the data lie within their parameter ranges and to eliminate outliers.
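As a hedged illustration, the following pandas sketch shows how such cleaning might be performed; the file name, column names, and valid ranges are assumptions made for illustration, not the actual values used in this study.

```python
import pandas as pd

# Hypothetical illustration of the cleaning steps described above;
# the file name, column names, and valid ranges are assumptions.
df = pd.read_csv("well_logging.csv")

# Delete variables (columns) with too many missing values.
df = df.dropna(axis=1, thresh=int(0.7 * len(df)))

# Delete observation samples (rows) that still contain missing values.
df = df.dropna(axis=0)

# Remove exact duplicate records (redundant data).
df = df.drop_duplicates()

# Consistency check: keep values only within their physical parameter
# ranges, e.g., shale content (SH) must lie in [0, 100] percent.
df = df[(df["SH"] >= 0) & (df["SH"] <= 100)]
```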

2.2. Normalization
2.2.1. (0,1) Standardization

Traversing each data value in each feature, the maximum and minimum values are recorded as Max and Min, respectively, and their difference is used to normalize the data [4] as follows:

$$x' = \frac{x - \mathrm{Min}}{\mathrm{Max} - \mathrm{Min}}$$

2.2.2. Z-Score Standardization

The data are also Z-score standardized [10] by determining the mean $\mu$ and standard deviation $\sigma$ of the original data as follows:

$$z = \frac{x - \mu}{\sigma}$$

2.2.3. Sigmoid Standardization

The sigmoid function has an S-shaped curve, as shown in Figure 1, which makes it a good threshold function. It is centrally symmetric about the point (0, 0.5) and has its largest slope near that point. As the data tend toward positive or negative infinity, the output values approach 1 and 0, respectively, and the segmentation threshold can be changed by modifying the following formula:

$$S(x) = \frac{1}{1 + e^{-x}}$$
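The three standardizations above can be summarized in a short NumPy sketch; the sample matrix is hypothetical, and per-feature (column-wise) scaling is assumed.

```python
import numpy as np

def min_max_scale(x):
    """(0,1) standardization: (x - Min) / (Max - Min), per feature."""
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min)

def z_score(x):
    """Z-score standardization: (x - mean) / std, per feature."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def sigmoid(x):
    """Sigmoid standardization: squashes each value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Example: normalize a small (samples x features) matrix of well data.
data = np.array([[1.0, 200.0], [2.0, 240.0], [3.0, 280.0]])
print(min_max_scale(data))
print(z_score(data))
print(sigmoid(z_score(data)))  # sigmoid applied after centering the data
```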

3. K-Means Clustering Algorithm

Because the original data lack labels, automatically determining a threshold to classify high and low production would not only affect the accuracy of the results but also make them less convincing. Moreover, in the later stages of production, exploitation is prioritized for high-production wells, whereas low-production wells require manual intervention to improve their output, among many other considerations. It is therefore necessary to cluster the wells by production level into three categories, labelling them as high, medium, and low yield, so that the results can better assist decision-making.

Spatial clustering is an important method for partitioning or grouping spatial data. Objects are divided into several subsets according to a similarity criterion, minimizing the difference among the elements in the same subset and maximizing the difference among the elements in different subsets. The usual spatial clustering algorithm can be based on a variety of distances such as Euclidean or Manhattan distances. The most commonly used distance is Euclidean, which is calculated as follows:

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

In the above formula, $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ are two data objects of $n$ dimensions.

In the K-means algorithm, the constant $K$, which represents the final number of clusters, is determined in advance. First, $K$ initial points are randomly selected as centroids, and each sample point is assigned to the most similar class by calculating the similarity between the sample and each centroid (using the Euclidean distance). Then, the centroid of each class is recalculated as the mean of the samples within the class. The above process is repeated until the centroids no longer change, which determines the final category of each sample and the centroid of each class [11–13]. The pseudocode for the implementation of this algorithm in this study is given below.

Input: the true value of the output of 156 wells.

Output: K clusters of the well outputs and the centroid of each cluster.

Steps:
(1) Initialize the constant K = 3 and randomly select the initial centroid of each cluster.
(2) Repeat the following process until the centroids no longer change:
  (A) Calculate the similarity between each sample and each centroid and assign the sample to the most similar class.
  (B) Recalculate the centroid of each class.
(3) Output the final centroids and the members of each class.
(4) End.
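A minimal sketch of this procedure using scikit-learn's KMeans is given below; the well yields are synthetic stand-ins, since the real production values are not public.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical yields of 156 wells (random stand-ins for the real data);
# K-means with K = 3 labels each well as low, medium, or high production.
rng = np.random.default_rng(0)
yields = rng.gamma(shape=2.0, scale=50.0, size=156).reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(yields)
print("Centroids:", kmeans.cluster_centers_.ravel())
print("Label of each well:", kmeans.labels_)
```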

4. Convolutional Neural Network Principles

A convolutional neural network is an efficient pattern recognition method that has been developed in recent years. The basic structure of a convolutional neural network consists of two layers. The first is the feature extraction layer. In this layer, the input of each neuron is connected to the output of the previous layer and the feature is extracted. The second is the feature mapping layer. Each layer of the network is composed of multiple feature maps, each of which is a plane, and the weights of all the neurons on the plane are equal [14, 15].

4.1. Convolutional Layer

In the convolutional layer, the features extracted from the previous layer are convoluted with a convolution kernel, and the results, after passing through the activation function, form the feature maps of the current layer. The convolution is calculated as follows:

$$a^{l,k} = f\left(\sum_{d=1}^{D} X^{l,d} \ast W^{l,k}\right)$$

Here, $D$ is the number of input matrices (the last dimension of the input tensor), $X^{l,d}$ represents the $d$-th input matrix of the $l$-th layer [16], $W^{l,k}$ represents the $k$-th convolution kernel matrix of the $l$-th layer, $\ast$ denotes the convolution operation, $f(\cdot)$ is the activation function, and $a^{l,k}$ represents the output matrix corresponding to convolution kernel $k$ of the $l$-th layer.
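The following NumPy sketch illustrates the formula above for a single output feature map; the input and kernel values are random placeholders, and ReLU is assumed as the activation function.

```python
import numpy as np

def conv2d_single(x, w):
    """Valid 2-D convolution (implemented as cross-correlation, as in
    most deep learning frameworks) of one input matrix with one kernel."""
    rows = x.shape[0] - w.shape[0] + 1
    cols = x.shape[1] - w.shape[1] + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(x[i:i + w.shape[0], j:j + w.shape[1]] * w)
    return out

# One output feature map: sum the convolutions over all D input matrices,
# then apply the activation f (ReLU here).
D = 2
inputs = [np.random.randn(6, 6) for _ in range(D)]
kernels = [np.random.randn(3, 3) for _ in range(D)]
feature_map = np.maximum(0, sum(conv2d_single(x, w)
                                for x, w in zip(inputs, kernels)))
print(feature_map.shape)  # (4, 4)
```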

4.2. Pooling Layer

The pooling layer compresses the features of the input. On the one hand, it makes the feature maps smaller, reducing the complexity of the network computation; on the other hand, it compresses the features so that the main features are retained. Pooling operations are generally divided into two categories: max pooling and average pooling [17]. To extract the data features objectively, we use average pooling in the proposed method because it is well suited to the logging and fracturing data.
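A minimal NumPy sketch of non-overlapping average pooling is given below; the 2×2 window and stride are illustrative choices.

```python
import numpy as np

def average_pool(x, size=2, stride=2):
    """Non-overlapping average pooling over a 2-D feature map."""
    rows = (x.shape[0] - size) // stride + 1
    cols = (x.shape[1] - size) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = x[i * stride:i * stride + size,
                      j * stride:j * stride + size]
            out[i, j] = patch.mean()  # mean of the window, not the max
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(average_pool(x))  # 2x2 map of the 2x2 block means
```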

5. Implementation of the AlexNet-Based Model

In the multi-input convolutional neural network based on AlexNet, the key step is to splice the logging and fracturing data along the row dimension. Because the logging and fracturing data have different dimensions, convolution and pooling are first used to extract the characteristics of each input.

5.1. Introduction to the AlexNet Model

The classic AlexNet convolutional neural network model was proposed for the ImageNet competition [18] and is mainly used for the recognition of large, high-resolution images. For oil and gas production prediction, the amount of data per well is small, so the classic AlexNet model is not directly suitable and must be adjusted [19]. Well logging and fracturing both strongly influence oil and gas production, so to minimize the error between the predicted and real results, the effects of logging and fracturing data on production must be considered simultaneously. Therefore, a prediction model based on a multi-input convolutional neural network is proposed in this paper. The model is shown in Figure 2.

In the prediction model presented in Figure 2, the logging and fracturing data are first input separately, and three convolutional layers are applied to each. After the third convolutional layer, the data from the two branches are merged, and the merged data are used as the input of the AlexNet model. To avoid overfitting, dropout [20] is added after the fully connected layers of the model. Dropout randomly deletes a certain proportion of the hidden layer neurons at the beginning of each training iteration (the proportion is set manually, usually to 0.5), while the numbers of input and output neurons remain unchanged. That is, dropout sets the output of a hidden node to zero with a certain probability during training, and when the weights are updated, the weights of that node are not updated, as follows:

$$r = m \odot a(Wv)$$

Here, $v$ is an $(n \times 1)$-dimensional column vector, $W$ is a $(d \times n)$-dimensional matrix, $m$ is a $d$-dimensional binary vector whose entries follow a Bernoulli distribution, and $a(\cdot)$ is an excitation function satisfying $a(0) = 0$. The multiplication of $m$ and $a(Wv)$ here is the multiplication of corresponding elements [18].
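The following NumPy sketch illustrates this dropout formulation; the layer sizes are arbitrary, ReLU is assumed as the excitation function, and the rescaling used by inverted dropout in practice is omitted for brevity.

```python
import numpy as np

def dropout_forward(v, W, p=0.5, training=True):
    """Forward pass of a layer with dropout: r = m * a(Wv), where a is
    ReLU (which satisfies a(0) = 0) and m is a Bernoulli mask."""
    a = np.maximum(0, W @ v)           # a(Wv), ReLU activation
    if not training:
        return a                        # no dropout at inference time
    m = (np.random.rand(a.shape[0]) > p).astype(float)  # keep prob. 1 - p
    return m * a                        # element-wise multiplication

v = np.random.randn(8)                  # input column vector (n = 8)
W = np.random.randn(16, 8)              # weight matrix (d x n)
print(dropout_forward(v, W, p=0.5))     # about half of the outputs zeroed
```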

5.2. Parameter Setting of the Proposed Model

When the two input layers are merged in the AlexNet-based multi-input convolutional neural network, they are concatenated along the row dimension. The convolutional layers and the first two fully connected layers use the rectified linear unit (ReLU) activation function:

$$f(x) = \max(0, x)$$

The last fully connected layer uses the Softmax activation function [5, 6].
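A minimal NumPy sketch of the two activation functions, with hypothetical class scores for the three yield categories:

```python
import numpy as np

def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return np.maximum(0, x)

def softmax(x):
    """Softmax over class scores; subtracting the max improves stability."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, -1.0])       # hypothetical high/medium/low scores
print(relu(np.array([-3.0, 0.5, 2.0])))   # [0.  0.5 2. ]
print(softmax(scores))                    # probabilities summing to 1
```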

The two most important steps in a convolutional neural network are the pooling and convolution operations. The main function of the pooling operation is to reduce the number of computation parameters, and the convolution operation is used to extract local features [21–23]. To adapt the model to the various sizes of input, the model for the well log data is defined as model_left, and its parameters are listed in Table 1. The model for the fracturing data is defined as model_right, and its parameters are listed in Table 2. There are 156 sets of input data in model_left. Each group represents the logging parameters of a well, and the size of each group is 416×45. Similarly, there are 156 sets of input data in model_right. Each group represents the fracturing parameters of a well, and the size of each group is 41,195×5.

After the convolution and pooling operations of model_left and model_right are completed, the data for well logging and fracturing are compressed into the same number of data rows, so the outputs of model_left and model_right are spliced along the row dimension. The model after the splicing is called the final model. The fully connected layer is implemented for the final model, and its parameters are listed in Table 3.

Finally, stochastic gradient descent (SGD) is used to train the model until convergence is achieved, resulting in the final prediction results of high, medium, and low oil and gas production.
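As a hedged illustration, the following Keras sketch builds a two-branch multi-input network in the spirit of Figure 2; the filter counts, kernel sizes, pooling sizes, and dense-layer widths are placeholders (Tables 1–3 are not reproduced here), and the pooling in the fracturing branch is chosen so that both branches end with the same number of columns and channels before they are spliced along the row dimension.

```python
from tensorflow.keras import layers, models, optimizers

def branch(input_shape, pool):
    """One convolutional branch: three Conv2D + average pooling stages."""
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (32, 64, 128):          # placeholder filter counts
        x = layers.Conv2D(filters, (3, 3), padding="same",
                          activation="relu")(x)
        x = layers.AveragePooling2D(pool)(x)
    return inp, x

# Logging branch (model_left): 416 depth samples x 45 curves.
left_in, left_out = branch((416, 45, 1), pool=(2, 2))
# Fracturing branch (model_right): 41,195 time samples x 5 curves,
# pooled only along rows so the column count matches the left branch.
right_in, right_out = branch((41195, 5, 1), pool=(8, 1))

# Splice the two branches along the row dimension (axis 1).
merged = layers.Concatenate(axis=1)([left_out, right_out])
x = layers.Flatten()(merged)
x = layers.Dense(256, activation="relu")(x)  # placeholder widths
x = layers.Dropout(0.5)(x)                   # dropout proportion 0.5
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(3, activation="softmax")(x)  # high / medium / low yield

model = models.Model(inputs=[left_in, right_in], outputs=out)
model.compile(optimizer=optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Concatenating along the row axis only requires the two branches to agree in their remaining dimensions, which is why the fracturing branch is pooled only along rows in this sketch.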

6. Experimental Results and Analysis

6.1. Experimental Data

The data used in this experiment are derived from the real data of a coalbed methane development in a mining area in Xinjiang, China. These data were directly measured by field instruments. There are data for 156 wells, and each well has its own corresponding logging and fracturing data. The log data contain 46 characteristics, including DEPTH (depth), SP (spontaneous potential), GR (natural gamma ray count), LLS (shallow investigation resistivity log), LLD (deep investigation resistivity log), RHOB (bulk density), CNL (compensated neutrons), and SH (shale content), among others. The fracturing data contain five characteristics: sand ratio, discharge volume, discharge accumulation, sand accumulation, and casing pressure. Using the FS-6 well as an example, we show some of the preprocessed logging data in Table 4 and the fracturing data in Table 5. In the experiment, a cross-validation method is adopted using training and test sets: the training set is 80% of the data, and the test set is the remaining 20%. For preprocessing, the experimental data are converted into the NPY file format to facilitate reading and operation. The computing environment consisted of an Intel Core i5 CPU at 1.60 GHz and Python 3.5.2.
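A minimal sketch of the data preparation described above; the arrays are random stand-ins for the private well data, and the 80/20 split uses scikit-learn's train_test_split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Random stand-ins for the private data: one logging matrix and one
# fracturing matrix per well, plus an integer yield label from K-means.
logging = np.random.randn(156, 416, 45).astype(np.float32)
fracturing = np.random.randn(156, 41195, 5).astype(np.float32)
labels = np.random.randint(0, 3, size=156)   # 0/1/2 = low/medium/high

# Convert to .npy files to facilitate fast reading during training.
np.save("logging.npy", logging)
np.save("fracturing.npy", fracturing)
np.save("labels.npy", labels)

# 80% training set / 20% test set, as in the experiment.
idx_train, idx_test = train_test_split(
    np.arange(156), test_size=0.2, random_state=0)
X_log_train, X_frac_train = logging[idx_train], fracturing[idx_train]
X_log_test, X_frac_test = logging[idx_test], fracturing[idx_test]
y_train, y_test = labels[idx_train], labels[idx_test]
```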

6.2. K-Means Clustering Results

The output data of 156 wells were clustered into three categories using K-means, and the results are shown in Figure 3.

In Figure 3, the 156 wells are clustered into three types according to their yield: red, green, and blue points indicate low-, medium-, and high-yield wells, respectively.

6.3. Impact on Accuracy
6.3.1. Influence of the Number of Training Iterations on Accuracy

Figures 4, 5, and 6 show the relationship between the number of training iterations and the accuracy under (0,1) standardization, Z-score standardization, and sigmoid standardization, respectively. After training, the prediction accuracy on the training set approaches 100%. As training continues, the accuracy of the model on the test set also continues to improve, demonstrating the good performance of the convolutional neural network.

6.3.2. Influence of Data Normalization on Accuracy

The experiment evaluated the effects of three normalization methods on the accuracy of the resulting model: (0,1) standardization, Z-score standardization, and sigmoid standardization. Table 6 shows the accuracy of the model under each normalization method. Cross-validation was used, and the averaged results of 30 runs are shown.

Table 6 shows that the resulting model is most accurate when Z-score normalization is employed. The normalization method has a great influence on the accuracy: (0,1) standardization is vulnerable to outliers and hence lacks robustness, whereas Z-score normalization is suitable for normally distributed data, and the data in this case approximately follow a normal distribution. When the sigmoid function is used to normalize the data, there is still much room for improvement in accuracy because of the limited data volume.

6.4. Comparison Experiment and Discussion

This experiment compared the proposed model with six methods that are widely used in the field of oil and gas prediction: the BP neural network [2, 24–26], support vector machine (SVM) [27–29], artificial neural network (ANN) [4, 30–32], radial basis function neural network (RBFNN) [3, 33], K-nearest neighbor (KNN) [34], and decision tree [34] methods. For these baselines, the logging and fracturing data are averaged separately over each feature of each well, and the spliced logging and fracturing averages for the same well are used as the input. Table 7 compares the experimental results of the six baselines with those of the proposed multi-input convolutional neural network model based on AlexNet [35]. Cross-validation is used, and the program is run 30 times to obtain the average accuracy.

Table 7 shows that the proposed model has a clear advantage because of its more expressive network structure. The BP, SVM, ANN, RBFNN, KNN, and decision tree methods cannot consider all the characteristics of each well, because they use the mean of each characteristic as the input. In addition, the network structure of the BP neural network is relatively simple, and its performance on complex data is limited. SVM is better suited to linearly separable problems. ANN is robust to noisy data and can approximate complex nonlinear relationships, but its training time is long and it sometimes fails to converge; although its accuracy is relatively high, there is still much room for improvement. RBFNN has strong generalization ability, high approximation accuracy, and fast convergence, and its results are better than those of the BP neural network. The KNN algorithm is conceptually simple and theoretically mature and can be used for nonlinear classification; because the class sizes in the samples are balanced, KNN achieves relatively high accuracy. Finally, the decision tree algorithm lacks stability, because small changes in the training data may lead to different tree models. The proposed convolutional neural network based on AlexNet achieves higher accuracy while preserving data integrity; in contrast, the other models lose important information because their inputs are averaged, resulting in lower accuracy. Furthermore, by increasing or decreasing the number of layers in the network structure and varying the parameters of each layer, we found that the network configuration introduced in Section 5.2 achieves the best results.

7. Conclusion

In our implementation, the algorithm initially converged slowly and sometimes diverged; by optimizing the network input parameters and adjusting the step size, the convergence speed of the network was improved. In addition, the data used in the experiments were limited to 156 wells. Nevertheless, given the key logging and fracturing parameters of a well, the proposed model can predict its oil and gas production.

This study showed that a multi-input convolutional neural network model based on AlexNet can be used to predict oil and gas production, not only with high accuracy, but also with logging and fracturing data as the input. In contrast to the other methods, which predict oil and gas production by averaging the logging and fracturing data, a multi-input convolutional neural network model based on AlexNet can be used to consider the logging data of each well depth and fracturing data over each time interval. This allows each piece of data to retain as much original information as possible, increasing the diversity of information in the data and making the results more useful. The proposed approach achieved better results than other neural networks.

Data Availability

The data analyzed in this study are privately held for commercial reasons and hence cannot be made publicly available.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is based on research work supported by the Education Department of Sichuan of China under Grant No. 2018RZ0093.