With the rapid development of computer technology, machine learning methods have gradually been integrated into the petroleum industry and have produced notable results in both conventional and unconventional reservoirs. This paper presents an alternative method to predict the vertical heterogeneity of a reservoir from dynamic production data using various deep neural networks. A numerical simulation technique was adopted to obtain the required dataset, which contains dynamic production data calculated under different heterogeneous reservoir conditions. Machine learning models were established through deep neural networks, which learn and capture the relationship between dynamic production data and reservoir heterogeneity, so as to invert the vertical permeability. Model validation shows that the machine learning methods perform excellently in predicting heterogeneity, with an RMSE of 12.71 mD, effectively estimating the permeability of the entire reservoir. Moreover, the overall AARD of the predictions obtained by the CNN method was kept at 11.51%, the highest accuracy compared with the BP and LSTM neural networks. The permeability contrast, an important parameter for characterizing heterogeneity, can also be predicted precisely, with a deviation below 10%. This study demonstrates the potential of machine learning methods for vertical heterogeneity prediction in reservoirs.

1. Introduction

Reservoir heterogeneity is one of the most important characteristics in reservoir description and has a significant impact on fluid flow and oil recovery. It provides fundamental information for building reliable reservoir models and plays a crucial role in reservoir development. Vertical heterogeneity in particular is a key element in predicting the distribution of remaining oil in the reservoir. Permeability, a primary property for characterizing heterogeneity, reflects the ability of fluids to flow through the formation under applied pressure gradients.

Different types of data that can be directly obtained in oil fields have been studied and utilized to predict vertical permeability in reservoirs. Deutsch [1] presented a consistent numerical modelling framework to obtain vertical permeability based on core data, conventional well logs, high-resolution image logs, and detailed geological interpretation. In addition, Russell et al. [2] utilized the High-Resolution Dipmeter Tool, the Formation MicroScanner, and conventional log data to characterize and extrapolate geological heterogeneity. Moreover, Perez and Chopra [3] adopted the successive random addition technique combined with core and log data from adjacent wells. Evidently, vertical heterogeneity is usually derived from core and well log data because of their high vertical resolution. However, the estimation of vertical permeability faces several limitations. On one hand, it is difficult to collect representative core measurements. On the other hand, the high viscosity of some fluids makes it impossible to perform well testing. Above all, core and well logging data reflect vertical permeability only at a certain location in the reservoir rather than characterizing the reservoir as a whole. Dynamic production data, such as bottom hole pressure, oil production, and water cut, can be utilized to overcome these problems, because some characteristics of the entire reservoir are reflected in the dynamic variation of these data. Meanwhile, dynamic production data are also the most accessible and effective data available, and they contain information about vertical heterogeneity. Machine learning methods can learn and capture the relationship between dynamic production data and vertical permeability, making it possible to estimate vertical permeability from production data.

In recent years, with the development of artificial intelligence, a large number of machine learning methods have been applied in the petroleum industry, often outperforming traditional methods. Talebi et al. [4] efficiently estimated reservoir saturation pressure based on the multilayer perceptron (MLP) and radial basis function (RBF) neural networks. Zhang et al. [5] adopted the long short-term memory neural network to predict the water saturation distribution in reservoirs. Schuetter et al. [6] applied random forest (RF), support vector regression (SVR), and gradient-boosting machine (GBM) models to predict oil production in unconventional shale reservoirs. Silpngarmlers et al. [7] trained BP neural networks on relative permeability curve data collected from published papers and experiments to develop liquid/liquid and liquid/gas two-phase relative permeability predictors. Zhu et al. [8, 9] predicted total organic carbon content and proposed a TOC logging evaluation method based on semisupervised learning. Moreover, the permeability of a tight gas reservoir was inverted using a deep Boltzmann kernel extreme learning machine [10]. Tian and Horne [11, 12] combined machine learning with improved Pearson correlation coefficients to evaluate interwell connectivity. Lim [13] applied an intelligent technique combining fuzzy logic and neural networks to estimate reservoir properties from well logs. In summary, machine learning methods have penetrated various fields of the petroleum industry and achieved considerable success.

Machine learning methods [14–16] are successful and have attracted widespread attention because they can find nonlinear relationships among multiple variables without physical models. This advantage is well suited to problems in the petroleum industry, where the internal relationships among parameters are complicated. In this study, three neural network methods are used to learn and capture the relationship between the dynamic production data of a reservoir and its vertical permeability: the back propagation (BP) neural network, the convolutional neural network (CNN), and the long short-term memory (LSTM) neural network. The BP neural network performs well on nonlinear mapping problems owing to the continuous improvement of its internal back propagation algorithm [17]. The CNN is widely applied because of its unique feature extraction approach, which has also achieved success in the field of oil reservoirs [18, 19]; this way of extracting features can equally be applied to dynamic production data. The LSTM neural network [20], a variant of the traditional recurrent neural network, is better at dealing with time series problems.

This paper is organized into four parts. First, we establish oil-water two-phase flow models by numerical simulation to calculate oil production, water cut, and pressure data under different heterogeneous reservoir and permeability contrast conditions; these data are treated as the feature dataset for machine learning. Second, machine learning models are established with different neural networks, including the BP neural network, the CNN, and the LSTM neural network. Third, based on the dataset, the machine learning models are trained to learn and capture the relationship between dynamic production data and vertical permeability. Finally, we compare the accuracy and computation time of the models to verify their prediction performance. The study provides an alternative way to quickly predict vertical permeability from dynamic production data using machine learning methods.

2. Methodology

Three deep neural networks are utilized to establish machine learning models which can capture characteristics between dynamic data and vertical heterogeneity in reservoirs, including CNN, BP, and LSTM neural networks.

2.1. BP Neural Network

The BP neural network [21, 22] combines a multilayer perceptron structure with the back propagation algorithm. Owing to the continuous improvement of the back propagation algorithm, the BP neural network performs well in capturing nonlinear mapping relationships between variables. Its operation has two stages: forward propagation and backward propagation. Forward propagation is the learning pass of the network, which captures the changing characteristics of the dynamic production data and outputs a prediction of the vertical permeability; the learning result is memorized by the weights and thresholds of each neuron. A loss function is then used to calculate the error between the model prediction and the actual vertical permeability. Finally, based on the back propagation algorithm, this error is used to adjust and update the weights and thresholds in the neurons, completing one training iteration of the neural network. By repeating these two stages, the network is trained continuously so that the predicted value gradually approaches the real permeability.

The BP neural network structure designed in this study is shown in Figure 1. The input layer of the model is composed of water cut, oil production, and water injection pressure data, each of which contains 1080 neurons. In order to fully capture the changing characteristics of data, the hidden layer is designed as three layers, with 50 neurons in each layer. Finally, the number of neurons in the output layer is 5, representing the vertical permeability of the five small layers in the simulated reservoir.
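To make the two-stage process concrete, the following minimal NumPy sketch implements forward propagation and one back propagation update for the architecture described above (3 × 1080 inputs, three hidden layers of 50 neurons, 5 outputs). The ReLU activation, He-style initialization, learning rate, and normalized targets are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 3 input series x 1080 points, three hidden
# layers of 50 neurons, and 5 outputs (one permeability per reservoir layer).
sizes = [3 * 1080, 50, 50, 50, 5]
W = [rng.normal(0, np.sqrt(2 / m), (m, n)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    """Forward propagation; keeps pre-activations for the backward pass."""
    acts, zs = [x], []
    for i, (w, bias) in enumerate(zip(W, b)):
        z = acts[-1] @ w + bias
        zs.append(z)
        acts.append(z if i == len(W) - 1 else relu(z))  # linear output layer
    return acts, zs

def backward_step(x, y, lr=1e-4):
    """One back propagation update of all weights and thresholds (MSE loss)."""
    acts, zs = forward(x)
    delta = 2.0 * (acts[-1] - y) / y.size        # gradient of MSE w.r.t. output
    for i in reversed(range(len(W))):
        grad_w = np.outer(acts[i], delta)
        if i > 0:                                # propagate error before updating W[i]
            prev_delta = (W[i] @ delta) * (zs[i - 1] > 0)
        W[i] -= lr * grad_w
        b[i] -= lr * delta
        if i > 0:
            delta = prev_delta
```

Repeating the forward and backward passes over the 320 training samples would correspond to the training process described in Section 3.2.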

2.2. Convolution Neural Network

The convolutional neural network [23–25], one of the most widely used deep learning algorithms, consists of five parts: the input layer, convolution layers, pooling layers, fully connected layers, and the output layer. Its biggest advantage is that it can extract local data features through convolution and pooling operations, so it performs well when the number of input feature points is particularly large. The network structure of the CNN designed in this study is shown in Figure 2. First, to facilitate feature extraction by the convolution kernels, the data must be reshaped: the oil production, water cut, and water injection pressure series (1080 points each) were integrated into a single dataset and reshaped into a new format. Then, two convolutional layers were added to extract the data features, with 64 and 128 convolution kernels, respectively, each layer using its own kernel size. Max pooling was adopted in the two pooling layers. Finally, the fully connected part was composed of three hidden layers with 30 neurons per layer.
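As a concrete illustration, the sketch below reproduces the data flow of such an architecture in plain NumPy: two convolution-plus-max-pooling stages with 64 and 128 kernels, followed by three fully connected layers of 30 neurons and a 5-neuron output. The kernel widths (5 and 3), the pooling size (2), and the random weights are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1-D cross-correlation over a (channels, length) input, then ReLU."""
    out_ch, in_ch, k = kernels.shape
    out = np.zeros((out_ch, x.shape[1] - k + 1))
    for o in range(out_ch):
        for c in range(in_ch):
            out[o] += np.correlate(x[c], kernels[o, c], mode="valid")
    return np.maximum(out, 0.0)

def maxpool1d(x, p=2):
    """Non-overlapping max pooling along the length axis."""
    length = x.shape[1] // p
    return x[:, :length * p].reshape(x.shape[0], length, p).max(axis=2)

# three 1080-point production series stacked as input channels
x = rng.normal(size=(3, 1080))

k1 = rng.normal(0, 0.1, (64, 3, 5))    # 64 kernels; width 5 is assumed
k2 = rng.normal(0, 0.1, (128, 64, 3))  # 128 kernels; width 3 is assumed

h = maxpool1d(conv1d_relu(x, k1))      # -> (64, 538)
h = maxpool1d(conv1d_relu(h, k2))      # -> (128, 268)
v = h.ravel()

# fully connected part: three hidden layers of 30 neurons, then 5 outputs
fc_sizes = [v.size, 30, 30, 30, 5]
for m, n in zip(fc_sizes, fc_sizes[1:]):
    v = v @ rng.normal(0, np.sqrt(2 / m), (m, n))
    if n != 5:
        v = np.maximum(v, 0.0)

pred = v  # one permeability value per reservoir layer
```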

2.3. LSTM Neural Network

LSTM [26, 27], a variant of the recurrent neural network (RNN), inherits most of the characteristics of RNN [28]. LSTM solves the gradient vanishing problem that RNNs easily suffer from during training. It adds three “gate units” to the RNN structure to judge and process the input information. The LSTM neural network also contains a memory called the “cell state,” which can store past information, such as months of production, water cut, and pressure data. Owing to the “gate units” and the “cell state,” LSTM has gradually become a research hotspot in machine learning in recent years. In this experiment, the input data of the LSTM consist of oil production, water cut, and injection pressure, and the time step designed in the network is 100.

The first key functional gate is the “forget gate layer.” This gate determines which part of the information should be discarded from the cell state:

$$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)$$

Another functional gate, the “input gate layer,” determines what new information from the input at this time step should be stored in the cell state:

$$i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)$$
$$\tilde{C}_t = \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right)$$
$$C_t = f_t \ast C_{t-1} + i_t \ast \tilde{C}_t$$

Finally, the last gate, which controls the output of every step, is the “output gate layer.” It generates the output information based on the updated cell state $C_t$ and the input data $x_t$ and $h_{t-1}$:

$$o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)$$
$$h_t = o_t \ast \tanh(C_t)$$

where $x_t$ is the input data at a single time step, $h_t$ is the output data of this step, $h_{t-1}$ is the output of the previous step, which is fed back into the network, $W$ and $b$ are the weight matrices and bias vectors of each gate, and $\sigma$ is the sigmoid activation function.
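A minimal NumPy sketch of a single LSTM cell can clarify how the forget, input, and output gates act on the cell state. The hidden size (8), the weight scale, and the random stand-in inputs are illustrative assumptions; the 3 input features (oil production, water cut, injection pressure) and the 100 time steps follow the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM step: forget, input, and output gates acting on the cell state."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f = sigmoid(P["Wf"] @ z + P["bf"])       # forget gate
    i = sigmoid(P["Wi"] @ z + P["bi"])       # input gate
    c_hat = np.tanh(P["Wc"] @ z + P["bc"])   # candidate cell state
    c = f * c_prev + i * c_hat               # updated cell state
    o = sigmoid(P["Wo"] @ z + P["bo"])       # output gate
    h = o * np.tanh(c)                       # output of this step
    return h, c

rng = np.random.default_rng(0)
n_in, n_h = 3, 8   # 3 features per step; hidden size 8 is an assumption
P = {k: rng.normal(0, 0.1, (n_h, n_h + n_in)) for k in ("Wf", "Wi", "Wc", "Wo")}
P.update({k: np.zeros(n_h) for k in ("bf", "bi", "bc", "bo")})

h, c = np.zeros(n_h), np.zeros(n_h)
for t in range(100):                         # 100 time steps, as in the text
    x_t = rng.normal(size=n_in)              # stand-in for one step of production data
    h, c = lstm_step(x_t, h, c, P)
```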

2.4. Model Evaluation

Model evaluation is of great significance not only for the training process of the neural network but also for the prediction stage after training. During training, the root mean square error (RMSE) is used as the loss function to assess each training result of the model; because of the loss function, the outputs of the machine learning model can be driven closer to the expected real values. After training is completed, statistical criteria are applied to verify the predictive performance of the model, including the average relative deviation (ARD) and the average absolute relative deviation (AARD) [29, 30]. The relevant formulas are

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}$$
$$\mathrm{ARD} = \frac{1}{N}\sum_{i=1}^{N}\frac{\hat{y}_i - y_i}{y_i} \times 100\%$$
$$\mathrm{AARD} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{\hat{y}_i - y_i}{y_i}\right| \times 100\%$$

where $N$ is the total number of data points in each set, $y_i$ is the expected value from each set, and $\hat{y}_i$ is the corresponding value calculated by the neural network.
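The three criteria can be computed in a few lines of NumPy; the helper names below are our own, not part of the paper.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, in the units of the data (here mD)."""
    return np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

def ard(y_true, y_pred):
    """Average relative deviation, %; signed, so errors can cancel out."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_pred - y_true) / y_true) * 100.0

def aard(y_true, y_pred):
    """Average absolute relative deviation, %."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0
```

Note that ARD is signed, so over- and underestimates can cancel, which is why it is always reported alongside AARD.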

3. Procedure

3.1. Dataset Collection

Machine learning methods require a large amount of sample data for learning and training. In this study, we need dynamic production data obtained under reservoir conditions with different vertical permeability. However, actual field development data suffer from two main limitations that make them unsuitable for machine learning. On the one hand, owing to the complexity of geological conditions, the average permeability of each small layer in the vertical direction of the reservoir is difficult to evaluate and test accurately. On the other hand, because of human interference and the limitations of monitoring equipment, dynamic production data tend to contain noise and missing values, which greatly affect model training. Therefore, numerical simulation is a good means of collecting the dataset.

This study takes a five-point well pattern as an example, and the established reservoir model is shown in Figure 3. The reservoir is vertically divided into five small layers, each of which is assigned a different permeability to simulate heterogeneity; in the horizontal direction, the reservoir is considered homogeneous. A three-dimensional grid structure was established to simulate the reservoir. According to the heterogeneity of the reservoir, a total of 400 sample reservoirs were designed, covering the MIP reservoir (100 samples), the MDP reservoir (100 samples), the IRP reservoir (160 samples), and the homogeneous reservoir (40 samples). Taking the MIP reservoir as an example, the permeability of the first layer is the smallest, the permeability of the fifth layer is the largest, and the middle layers follow an arithmetic progression (e.g., the permeability of layers 1 to 5 is 20, 40, 60, 80, and 100 mD). The permeability of the first layer ranges from 20 mD to 200 mD with a step size of 20 mD, and the permeability of the fifth layer exceeds that of the first layer by a value between 20 mD and 200 mD, also with a step size of 20 mD. Based on the oil-water two-phase flow model, this experiment used the classic IMPES method [31, 32] to obtain 400 sets of sample data, each of which contains the vertical permeability of the reservoir and the dynamic production data under the corresponding reservoir conditions. The duration of the simulation is 1080 days, and the model-specific parameters are shown in Table 1.
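As an illustration, the MIP subset can be enumerated programmatically from the stated ranges: first-layer permeability 20 to 200 mD in steps of 20 mD, fifth-layer increment 20 to 200 mD in steps of 20 mD, and middle layers in arithmetic progression. The sketch below generates the 100 resulting permeability profiles.

```python
import numpy as np

samples = []
for k1 in range(20, 201, 20):        # first-layer permeability, mD
    for inc in range(20, 201, 20):   # amount by which the fifth layer exceeds the first
        # five layers in arithmetic progression from k1 to k1 + inc
        samples.append(np.linspace(k1, k1 + inc, 5))

print(len(samples))  # 100 MIP sample reservoirs
```

For example, k1 = 20 mD with an increment of 80 mD reproduces the 20, 40, 60, 80, 100 mD profile mentioned in the text.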

The dynamic data contain a large number of characteristics that reflect the vertical heterogeneity of the reservoir. Take the water cut curve as an example, as shown in Figure 4. Under the condition that the average permeability of the entire reservoir is equal, different heterogeneous reservoirs are simulated by changing the permeability of each small layer. The average permeability of all reservoirs in the figure is 150 mD. The MIP reservoir is composed of five small layers with permeabilities of 50 mD, 100 mD, 150 mD, 200 mD, and 250 mD in turn, and the distribution of the MDP reservoir is exactly the opposite. A reservoir in which the permeability of all small layers is set to 150 mD is selected as a representative of the homogeneous reservoir, and the permeability of layers 1 to 5 in the IRP reservoir is 100 mD, 150 mD, 200 mD, 150 mD, and 100 mD. The most obvious characteristic shown by the water cut data is the water breakthrough time: at the same average permeability, breakthrough occurs earliest in the MIP reservoir. In addition, the water cut curves of reservoirs with different heterogeneity exhibit distinct changing characteristics. To verify the accuracy of the model, we also drew a comparison chart of water saturation, shown in Figure 5, which describes the water saturation distribution of the different reservoirs on the 1080th day. The first, third, and fifth layers were selected to make the features easier to observe. It can be seen that the low-permeability layer (the first layer of the MIP reservoir and the fifth layer of the MDP reservoir) is displaced more effectively in the MDP reservoir, which is consistent with the physical law.

3.2. Training Process

First, we divided the dataset of 400 samples into a training set and a test set according to a certain proportion [33]. In each sample, the dynamic production data are used as the input to the model, which enables the neural network to capture the characteristics of the data; the output of the model is the vertical permeability, i.e., the prediction of the neural network after learning. Then, based on the inputs and outputs of the training set, the neural network continuously updates its weights and thresholds through the back propagation algorithm to train the machine learning model. Since the machine learning model has memorized the training set, the test set must be used to verify the accuracy of its predictions: the test set inputs are fed as new data into the trained model, and the predicted results are compared with the test set outputs to verify the validity of the model. Finally, the machine learning models generated by the different neural networks are comprehensively compared in order to select the optimal model. In this experiment, the partition ratio of the dataset is 8 : 2, giving 320 samples in the training set and 80 samples in the test set. The data flow is shown in Figure 6.
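The 8 : 2 partition can be sketched in a few lines; the random seed below is arbitrary and only fixes the shuffle for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed for a reproducible shuffle

n_samples = 400
idx = rng.permutation(n_samples)  # shuffle sample indices before splitting
n_train = int(0.8 * n_samples)    # 8 : 2 partition ratio

train_idx, test_idx = idx[:n_train], idx[n_train:]
# train_idx selects the 320 training samples, test_idx the 80 held-out samples
```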

4. Results and Discussion

4.1. Model Calibration

Two main factors affect the accuracy of machine learning prediction. On the one hand, the quality of the data plays a crucial role, because the model needs to learn and capture the characteristics of the data. On the other hand, the structure of the model, such as the setting of the hidden layers, has a significant influence on the results. During model training, the back propagation algorithm is based on the chain rule, and the number of hidden layers determines the complexity of the derivatives in that chain. In theory, the more layers there are, the better the neural network can represent complex operations and nonlinear mappings among variables. However, too many hidden layers can trigger gradient vanishing or gradient explosion during the calculation, wasting computing resources and degrading prediction accuracy. Therefore, it is necessary to calibrate the number of hidden layers in the machine learning model. In this study, the other parameters are fixed, and the AARD of the model is used to observe the influence of the number of hidden layers on prediction accuracy.

It can be seen from Figure 7 that, for the convolutional neural network, when the number of hidden layers is increased to 2, the AARD of the model declines significantly, from 21.01% to 14.83%. Increasing the number of layers to 3 drops the AARD to 11.41%, indicating that the accuracy is still improving. However, with 4 or 5 layers, the AARD is basically stable compared with 3 layers and no longer changes appreciably. Considering both accuracy and computational resources, a hidden layer structure of 3 layers is selected to predict the vertical permeability of the reservoir.

4.2. Model Comparison and Error Analysis

As described in Section 2.4, the loss function is utilized to estimate the overall error of the model after each training. Therefore, we can supervise the training of the model in general through the decline curve of loss function. The loss function chosen for this experiment is RMSE.

As shown in Figure 8, the horizontal axis represents the number of training iterations, and the vertical axis represents the error of the vertical permeability calculated by the loss function after each iteration. The loss curves of the three neural networks all trend downward, indicating that prediction accuracy improves as training proceeds. On the test set, the final error of the LSTM model stabilizes at around 51 mD, an unsatisfactory prediction performance. The loss curve of the BP neural network shows strong volatility and finally settles at 24.18 mD, indicating higher prediction accuracy than the LSTM model. Compared with these two methods, the loss of the CNN model is the smallest, as low as 12.71 mD. In general, both the CNN and BP networks show good predictive performance on vertical permeability, while the accuracy of the LSTM model is not ideal.

By observing the decline of the loss function, we can clearly grasp the change of the average error over all samples. Because the neural network was trained on the training set and has memorized those data, the test set is used to verify the accuracy of the model predictions. Then, for each sample in the test set, a crossplot of the true permeability against the vertical permeability predicted by the different neural networks can be used to verify the accuracy of each model.

As shown in Figure 9, the horizontal axis is the permeability of each sample in the test set, and the vertical axis is the permeability calculated by the machine learning methods. The red line represents the line y = x; ideally, all points of predicted versus true values should lie on this line. For the BP neural network, most points are concentrated around the red line, but some still scatter strongly, indicating that the model is inaccurate in some cases. For the CNN model, almost all points lie close to the red line, showing better prediction performance on vertical permeability.

Based on the above crossplots, we can intuitively grasp the prediction capabilities of the different machine learning models. Next, in order to describe the error of each point more accurately, a relative deviation diagram is drawn by calculating the relative error of all sample data in the test set.

Figure 10 is a graph of relative deviations, where the abscissa is the sample permeability and the ordinate is the relative deviation between the predicted value and the sample. The blue dashed line indicates zero relative deviation, representing the ideal situation where the predicted value is exactly equal to the sample. For the CNN, the relative deviation of most predictions is less than 20% and is mainly concentrated near this line, maintaining high prediction accuracy; moreover, as the permeability increases, the relative error of the CNN gradually decreases. In contrast, the BP neural network shows larger deviations under both low and high permeability conditions. Finally, we calculate the overall error of the samples according to the statistical methods described in Section 2.4. The specific results are shown in Table 2.

From the evaluation results in Table 2, the ARD and AARD of the CNN predictions are only 6.28% and 11.51%, respectively. At the same time, the RMSE is 12.71 mD, the lowest among the three networks. It can be concluded that the unique feature extraction mechanism of the CNN model plays an important role in inverting the vertical permeability of the reservoir. The LSTM model, which performs very well on time series problems, was expected to do well here because the input data are time series; however, its prediction performance is unsatisfactory, likely because the LSTM architecture is best suited to outputs that are themselves time series, whereas the vertical permeability target here is not. Finally, the comparison of computation time shows that once a machine learning model is trained, the time required for prediction is extremely short, only about 1 second.

4.3. Prediction of Permeability Contrast

Permeability contrast is the ratio of the maximum to the minimum vertical permeability of the reservoir, which is of great significance for oil field development. The CNN model above can invert the permeability of the different layers of the reservoir for each sample in the test set. A crossplot is drawn to verify the accuracy of the model's prediction of the permeability contrast.
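The quantity itself is simple to derive from a predicted permeability profile; a one-function NumPy sketch (the function name is our own):

```python
import numpy as np

def permeability_contrast(layer_perms):
    """Ratio of the maximum to the minimum layer permeability in a vertical profile."""
    k = np.asarray(layer_perms, dtype=float)
    return k.max() / k.min()

# the MIP example from Section 3.1: layers of 50 to 250 mD
print(permeability_contrast([50, 100, 150, 200, 250]))  # 5.0
```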

As shown in Figure 11, the abscissa is the permeability contrast calculated for each sample of the test set, and the ordinate is the corresponding contrast predicted by the CNN model. Almost all points lie close to the line y = x, with deviations below 10%. At the same time, the AARD and RMSE of the permeability contrast are 9.58% and 0.534, respectively, further illustrating the excellent performance of the CNN model in predicting the permeability contrast.

5. Conclusion

This paper proposed an alternative method for predicting the vertical heterogeneity of reservoirs through machine learning based on dynamic production data. First, numerical simulation techniques were adopted to obtain dynamic production data under different heterogeneous reservoir conditions. Next, different neural network models were established and trained to capture the relationship between dynamic data and vertical permeability. Finally, reservoir permeability was accurately inverted by the trained machine learning models. From a comparative analysis of the prediction results, the following conclusions can be drawn.

The machine learning models showed excellent predictive performance on vertical permeability, with an RMSE of 12.71 mD, effectively estimating the permeability of the entire reservoir rather than a single position, in contrast to traditional methods. The overall AARD of the predictions obtained by the CNN method was kept at 11.51%, lower than that of the BP and LSTM networks. At the same time, the prediction time of the three neural networks is extremely short, about 1 second; therefore, the CNN can be selected as the optimal model through a comprehensive analysis of accuracy and prediction time. Finally, the machine learning method can also be used to predict the permeability contrast with a deviation below 10%, showing high accuracy under diverse heterogeneous reservoir conditions.

Data Availability

The manuscript is a self-contained data article; the entire data used to support the findings of this study are included within the article. If any additional information is required, this is available from the corresponding author upon request to [email protected].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This research was funded by the Fundamental Research Funds for the Central Universities of China (Grant No. FRF-TP-19-005B1) and the National Natural Science Foundation of China (Grant No. 51974357).