Research Article  Open Access
Yang Wang, Yin Lv, Dali Guo, Shu Zhang, Shixiang Jiao, "A Novel Multi-Input AlexNet Prediction Model for Oil and Gas Production", Mathematical Problems in Engineering, vol. 2018, Article ID 5076547, 9 pages, 2018. https://doi.org/10.1155/2018/5076547
A Novel Multi-Input AlexNet Prediction Model for Oil and Gas Production
Abstract
In the process of oilfield development, it is important to predict oil and gas production. The predicted value of oil production is the amount of oil that may be obtained within a certain area over a certain period. To meet the current demand for oil and gas production prediction, a prediction model using a multi-input convolutional neural network based on AlexNet is proposed in this paper. The model is applied to real oilfield data and achieves good results, improving prediction accuracy by 17.5%, 20.8%, 11.6%, 8.9%, 6.9%, and 14.9% relative to the backpropagation neural network, support vector machine, artificial neural network, radial basis function neural network, K-nearest neighbor, and decision tree methods, respectively. It addresses the uncertainty of oil and gas production caused by changing parameter values during petroleum exploitation and has far-reaching application significance.
1. Introduction
In recent years, owing to the persistent instability of international oil prices, the shortage of oil and gas resources, high production costs, and other factors, oilfields have imposed higher requirements on the prediction of oil and gas production. At present, however, many methods for predicting oil and gas production still lack data integrity [1].
In the field of oil and gas production prediction, many scholars at home and abroad have adopted different methods with good results: Zhou et al. used a backpropagation (BP) neural network to predict oil and gas production [2], Pi et al. used a radial basis function neural network to model a system and predict cumulative oil production [3], Huang et al. used an artificial neural network to successfully predict natural gas production [4], and Xing et al. predicted oilfield production through a combination of artificial neural networks [5, 6]. However, these methods are only applicable to vector data samples. To meet this condition, the inputs of the network models are the mean or variance of the original data, which reduces the effectiveness of the features. In addition, the logging data at each depth and the fracturing data in each time interval are not considered. Moreover, some of these methods consider only geological factors and ignore the characteristics of other stages of the mining process, which leads to incomplete data items and destroys the integrity of the data. Finally, some current methods based on BP neural networks only improve the accuracy of the algorithm without considering the restoration of all the original data [7].
Taking all the raw data into consideration during production prediction is a difficult problem. To solve it while also improving accuracy, this paper proposes a prediction model based on a multi-input convolutional neural network derived from AlexNet. Some open multivariable systems have complex nonlinearities and uncertainties, and most current production prediction methods are based on a single input, without considering the effects of logging and fracturing on the production level. As a result, traditional modeling methods carry uncertainty in both theoretical analysis and practical application. Because the factors that influence oil and gas production are various, and both logging and fracturing have a direct effect on production, this paper considers the influence of both logging and fracturing parameters on oil and gas production. Logging data from different depths and fracturing data over different time intervals clearly influence oil and gas production. To preserve the authenticity of the data, the proposed model, based on a multi-input convolutional neural network built on AlexNet [8], treats the logging and fracturing data of each well as matrices instead of single values, and the network is used to predict the development of coalbed methane. Previously, Duan et al. used a neural network to predict porosity [9], demonstrating that convolutional neural networks have good potential for processing geological reservoir parameters.
The remainder of this paper is organized as follows: data preprocessing, the K-means clustering algorithm, convolutional neural network theory, and the implementation of the AlexNet-based model are described in Sections 2–5, respectively. The experimental results are discussed in Section 6, and the paper is concluded in Section 7.
2. Data Preprocessing
2.1. Data Cleaning
The data used in this paper come from a region in Xinjiang comprising a total of 156 wells. The experimental data include logging data, fracturing data, and other manually acquired and collated data (for details, refer to Section 6.1). Because the data contain redundant, missing, and abnormal entries, we clean them to improve their quality. The main method used in data cleaning is deletion: deletion of observation samples and deletion of variables. The deleted variables and samples have little effect on the research objectives. At the same time, a consistency check is carried out to ensure the data are within their parameter ranges and to eliminate outliers.
2.2. Normalization
2.2.1. (0,1) Standardization
Traversing each data value in each feature, the maximum and minimum values are recorded as Max and Min, respectively, and their difference is used to normalize the data [4] as follows:

x′ = (x − Min) / (Max − Min)
2.2.2. Z-Score Standardization
The data are also Z-score standardized [10] using the mean μ and standard deviation σ of the original data as follows:

x′ = (x − μ) / σ
2.2.3. Sigmoid Standardization
The sigmoid function is a function with an S-shaped curve, as shown in Figure 1, and hence makes a good threshold function. It is centrally symmetric about the point (0, 0.5), near which it has its largest slope. As the input tends to positive or negative infinity, the output approaches 1 or 0, respectively, and the segmentation threshold can be changed by shifting the following formula:

f(x) = 1 / (1 + e^(−x))
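The three standardization methods above can be sketched in a few lines of NumPy (an illustrative sketch; the paper does not publish its preprocessing code):

```python
import numpy as np

def min_max_scale(x):
    """(0,1) standardization: x' = (x - Min) / (Max - Min)."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Z-score standardization: subtract the mean, divide by the standard deviation."""
    return (x - x.mean()) / x.std()

def sigmoid_scale(x):
    """Sigmoid standardization: squash each value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

feature = np.array([10.0, 20.0, 30.0, 40.0])  # one logging feature, for illustration
scaled = min_max_scale(feature)               # values now lie in [0, 1]
```

In practice each feature column would be scaled independently, using statistics computed on the training set only.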
3. K-Means Clustering Algorithm
Because the original data lack labels, automatically choosing a threshold to separate high and low production would not only affect the accuracy of the results but also make them less convincing. Moreover, in later production, high-production wells are prioritized for exploitation, whereas low-production wells need manual intervention to improve their output, among many other considerations. It is therefore necessary to classify well production into three categories, labeled high, medium, and low yield, so that the results can better assist decision-making.
Spatial clustering is an important method for partitioning or grouping spatial data. Objects are divided into several subsets according to a similarity criterion, minimizing the difference among elements in the same subset and maximizing the difference among elements in different subsets. A spatial clustering algorithm can be based on a variety of distances, such as the Euclidean or Manhattan distance. The most commonly used is the Euclidean distance, calculated as follows:

d(x, y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)² )

In the above formula, x and y are two data objects of n dimensions.
In the K-means algorithm, the constant K, which represents the final number of clusters, is fixed in advance. First, K initial points are randomly selected as centroids, and each sample point is assigned to the most similar class by calculating the similarity between the sample and each centroid (using the Euclidean distance). Then, the centroid of each class is recalculated (that is, as the mean of the samples within the class). This process is repeated until the centroids no longer change, at which point the final category of each sample and the centroid of each class are determined [11–13]. The pseudocode for the implementation of this algorithm in this study is given below.
Input: the true value of the output of 156 wells.
Output: K clusters of the well outputs and the centroid of each cluster.
Steps:
(1) Initialize the constant K = 3 and randomly select the initial centroid of each cluster.
(2) Repeat the following until the centroids no longer change:
(A) Calculate the similarity between each sample and each centroid and assign the sample to the most similar class.
(B) Recalculate each centroid.
(3) Output the final centroids and the members of each class.
(4) End.
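The steps above can be sketched in NumPy for one-dimensional well outputs. This is illustrative only; for determinism the sketch seeds the initial centroids with the first K samples, whereas the paper selects them randomly:

```python
import numpy as np

def kmeans_1d(samples, k=3, max_iter=100):
    """Cluster 1-D well outputs into k groups using Euclidean distance."""
    centroids = samples[:k].copy()  # deterministic init for this sketch
    for _ in range(max_iter):
        # (A) assign each sample to its nearest centroid
        labels = np.argmin(np.abs(samples[:, None] - centroids[None, :]), axis=1)
        # (B) recompute each centroid as the mean of its members
        new = np.array([samples[labels == j].mean() if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # stop once centroids no longer move
            break
        centroids = new
    return labels, centroids

outputs = np.array([1.0, 10.0, 20.0, 1.5, 11.0, 21.0, 2.0])  # toy yield values
labels, centroids = kmeans_1d(outputs)
```

With these toy yields the three natural groups (around 1.5, 10.5, and 20.5) are recovered after two passes.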
4. Convolutional Neural Network Principles
A convolutional neural network is an efficient pattern recognition method that has been developed in recent years. The basic structure of a convolutional neural network consists of two layers. The first is the feature extraction layer. In this layer, the input of each neuron is connected to the output of the previous layer and the feature is extracted. The second is the feature mapping layer. Each layer of the network is composed of multiple feature maps, each of which is a plane, and the weights of all the neurons on the plane are equal [14, 15].
4.1. Convolutional Layer
In the convolutional layer, the features extracted by the previous layer are convolved with a convolution kernel, and the results, after passing through the activation function, form the features of this layer. The convolution is calculated as follows:

a_{i,j}^{l,k} = f( Σ_{d=1}^{D} Σ_{m} Σ_{n} x_{d}^{l}(i+m, j+n) · w_{d}^{l,k}(m, n) + b^{l,k} )

Here, D is the number of input matrices (or the last dimension of a tensor), x_{d}^{l} represents the d-th input matrix of the l-th layer [16], w_{d}^{l,k} represents the d-th slice of the k-th convolution kernel matrix of the l-th layer, and a_{i,j}^{l,k} represents the corresponding element of the output matrix for the k-th convolution kernel of the l-th layer.
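The sum over the D input matrices can be illustrated with a direct, unoptimized NumPy loop (a sketch only; the paper's model would rely on a deep-learning framework rather than hand-written loops):

```python
import numpy as np

def conv2d(inputs, kernel, bias=0.0):
    """Valid-mode convolution: `inputs` has shape (D, H, W), `kernel` has
    shape (D, kH, kW); products are summed over all D input matrices."""
    D, H, W = inputs.shape
    _, kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # one output element: window times kernel, summed over d, m, n
            out[i, j] = np.sum(inputs[:, i:i + kH, j:j + kW] * kernel) + bias
    return out

result = conv2d(np.ones((2, 3, 3)), np.ones((2, 2, 2)))  # each window sums 8 ones
```

Strictly, this computes cross-correlation, the convention used by most deep-learning libraries; the activation function would then be applied to `out`.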
4.2. Pooling Layer
The pooling layer compresses the features of the input. On the one hand, this makes the features smaller and reduces the computational complexity of the network; on the other hand, it extracts the main features. Pooling operations are generally divided into two categories: max pooling and average pooling [17]. To extract the data features objectively, the proposed method uses average pooling, which is well suited to the related logging and fracturing data.
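Average pooling over a 2-D feature map can be sketched as follows (the window size and stride here are illustrative assumptions, not values given in the paper):

```python
import numpy as np

def average_pool(x, size=2, stride=2):
    """Average pooling: replace each (size x size) window with its mean."""
    H, W = x.shape
    out = np.zeros(((H - size) // stride + 1, (W - size) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].mean()
    return out

pooled = average_pool(np.arange(16, dtype=float).reshape(4, 4))
```

Max pooling would instead take `.max()` of each window; keeping the mean lets every logging or fracturing value in the window contribute to the pooled feature.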
5. Implementation of the AlexNet-Based Model
In the multi-input convolutional neural network based on AlexNet, the key step is to splice the logging and fracturing data along the row dimension. Because the dimensions of the logging and fracturing data differ, convolution and pooling are used to extract the characteristics of each input before splicing.
5.1. Introduction to the AlexNet Model
The classic AlexNet convolutional neural network model was proposed for the ImageNet competition [18] and is mainly used to recognize large, high-resolution images. For oil and gas production prediction, the amount of data per well is small, so the classic AlexNet model is not directly suitable and must be adjusted [19]. Well logging and fracturing both have a strong influence on oil and gas production, so to minimize the error between the predicted and real results, the effects of logging and fracturing data on production must be considered simultaneously. Therefore, a prediction model based on a multi-input convolutional neural network is proposed in this paper. The model is shown in Figure 2.
In the prediction model presented in Figure 2, the logging and fracturing data are first input separately, and each passes through three convolutional layers. After the third convolutional layer, the data from the two branches are merged, and the merged data are used as the input of the AlexNet model. To avoid overfitting, dropout [20] is added after the fully connected layer of the model. At the beginning of each training step, dropout randomly deletes a certain proportion of the hidden layer neurons (the proportion is set manually in each situation, usually to 0.5), while the numbers of input and output units do not change. That is, during training, dropout sets the output of a hidden node to zero with a certain probability, and when the weights are updated, the weights of that node are not updated:

y = m ∘ a(Wv)

Here, v is a d-dimensional column vector, W is a d′ × d matrix, m is a d′-dimensional 0–1 column vector, and a(·) is an excitation function satisfying a(0) = 0. The multiplication of m and a(Wv) here is element-wise [18].
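The effect of the 0–1 mask can be sketched as follows. This uses the common "inverted dropout" variant, which rescales the surviving activations at training time; the rescaling is an assumption, since the paper does not specify it:

```python
import numpy as np

def dropout_forward(activations, p=0.5, rng=None):
    """Zero each activation with probability p; rescale survivors by 1/(1-p)."""
    rng = rng if rng is not None else np.random.RandomState(0)
    m = (rng.rand(*activations.shape) >= p).astype(activations.dtype)  # 0-1 mask
    return activations * m / (1.0 - p), m

hidden = np.ones(1000)                      # a hypothetical hidden-layer output
dropped, mask = dropout_forward(hidden, p=0.5)
```

At test time no units are dropped, and thanks to the rescaling the expected activation matches training.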
5.2. Parameter Setting of the Proposed Model
When the two input layers are merged in the AlexNet-based multi-input convolutional neural network, they are concatenated along the row dimension. The activation function of the convolutional layers and the first two fully connected layers is the rectified linear unit (ReLU):

f(x) = max(0, x)

The activation function of the last fully connected layer is the Softmax function [5, 6].
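Both activation functions can be written directly in NumPy (a standard formulation, not code from the paper):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: f(x) = max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def softmax(x):
    """Softmax over the last axis; subtracting the max avoids overflow."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])   # hypothetical scores for 3 yield classes
probs = softmax(logits)              # high / medium / low class probabilities
```

Softmax turns the final layer's scores into a probability distribution over the three yield classes, which is why it is used only on the last layer.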
The two most important operations in a convolutional neural network are pooling and convolution. The main function of pooling is to reduce the number of computation parameters, and convolution is used to extract local features [21–23]. To adapt the model to the different input sizes, the model for the well log data is defined as model_left, with the parameters listed in Table 1, and the model for the fracturing data is defined as model_right, with the parameters listed in Table 2. There are 156 sets of input data in model_left; each group represents the logging parameters of one well, and the size of each group is 416×45. Similarly, there are 156 sets of input data in model_right; each group represents the fracturing parameters of one well, and the size of each group is 41,195×5.


After the convolution and pooling operations of model_left and model_right are completed, the data for well logging and fracturing are compressed into the same number of data rows, so the outputs of model_left and model_right are spliced along the row dimension. The model after the splicing is called the final model. The fully connected layer is implemented for the final model, and its parameters are listed in Table 3.
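The splicing step can be illustrated with NumPy. One consistent reading of "spliced along the row dimension" is that, once both branches have been pooled down to the same number of rows, the rows are kept and the features are appended side by side; the shapes below are hypothetical, not those of the actual model:

```python
import numpy as np

left_features = np.zeros((4, 6))   # stands in for the model_left output
right_features = np.ones((4, 3))   # stands in for the model_right output

# Both branches have the same number of rows (4), so each row of the merged
# matrix carries both logging- and fracturing-derived features.
merged = np.concatenate([left_features, right_features], axis=1)
```

The merged matrix then feeds the fully connected layers of the final model.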

Finally, stochastic gradient descent (SGD) is used to train the model until convergence is achieved, resulting in the final prediction results of high, medium, and low oil and gas production.
6. Experimental Results and Analysis
6.1. Experimental Data
The data used in this experiment are derived from real data from a coalbed methane development in a mining area in Xinjiang, China. These data were directly measured by field instruments. There are data for 156 wells, each with its own corresponding logging and fracturing data. The log data contain 46 characteristics, including DEPTH (depth), SP (spontaneous potential), GR (natural gamma ray), LLS (shallow laterolog resistivity), LLD (deep laterolog resistivity), RHOB (bulk density), CNL (compensated neutron), and SH (shale content), among others. The fracturing data contain five characteristics: sand ratio, discharge volume, discharge accumulation, sand accumulation, and casing pressure. Using the FS6 well as an example, we show some of the preprocessed logging data in Table 4 and fracturing data in Table 5. In the experiment, a cross-validation method with separate training and test sets is adopted: the training set is 80% of the data and the test set is the remaining 20%. For preprocessing, the experimental data are converted into the NPY file format to facilitate reading and operation. The computing environment consisted of an Intel Core i5 CPU at 1.60 GHz and Python 3.5.2.
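The 80/20 split and NPY conversion described above can be sketched as follows (the array shapes and file names here are hypothetical placeholders):

```python
import numpy as np

n_wells = 156
features = np.zeros((n_wells, 50))      # placeholder per-well feature matrix
labels = np.zeros(n_wells, dtype=int)   # high/medium/low class labels

rng = np.random.RandomState(42)
order = rng.permutation(n_wells)        # shuffle the wells once
split = int(0.8 * n_wells)              # 124 training wells, 32 test wells
train_idx, test_idx = order[:split], order[split:]

np.save("train_features.npy", features[train_idx])  # NPY files reload quickly
np.save("test_features.npy", features[test_idx])
```

The `.npy` binary format stores the array dtype and shape with the data, so later runs can memory-map or reload the wells without re-parsing raw logs.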


6.2. K-Means Clustering Results
The output data of the 156 wells were clustered into three categories using K-means, and the results are shown in Figure 3.
In Figure 3, the 156 wells are clustered into three types according to their yield: red, green, and blue points indicate low-, medium-, and high-yield wells, respectively.
6.3. Impact on Accuracy
6.3.1. Influence of the Number of Training Iterations on Accuracy
Figures 4, 5, and 6 show the relationship between the number of training iterations and the accuracy under (0,1) standardization, Z-score standardization, and sigmoid standardization, respectively. As training proceeds, the accuracy on the training set approaches 100%, and the accuracy on the test set continues to improve. This demonstrates the good performance of the convolutional neural network.
6.3.2. Influence of Data Normalization on Accuracy
The experiment evaluated the effects of three normalization methods on the accuracy of the resulting model: (0,1) standardization, Z-score standardization, and sigmoid standardization. Table 6 shows the accuracy of the model with each normalization method. Cross-validation was used, and the averages over 30 runs are shown.

Table 6 shows that the model is most accurate when Z-score normalization is employed. The normalization method has a great influence on accuracy: (0,1) standardization is vulnerable to outliers and has poor robustness, whereas Z-score normalization is suitable for approximately normally distributed data, which the data in this case follow. When the sigmoid function is used to normalize the data, there is still much room for improvement in accuracy because of the limited data volume.
6.4. Comparison Experiment and Discussion
This experiment compared the proposed model with six methods widely used in the field of oil and gas prediction: the BP neural network [2, 24–26], support vector machine (SVM) [27–29], artificial neural network (ANN) [4, 30–32], radial basis function neural network (RBFNN) [3, 33], K-nearest neighbor (KNN) [34], and decision tree [34]. For these baselines, the logging and fracturing data are averaged separately over each feature of each well, and the spliced logging and fracturing averages for the same well are used as the input. Table 7 compares the experimental results of the six baseline methods and the proposed multi-input convolutional neural network model based on AlexNet [35]. Cross-validation is used, and the program is run 30 times to obtain the average accuracy.

Table 7 shows that the proposed model has a clear advantage because of its more expressive network structure. The BP, SVM, ANN, RBFNN, KNN, and decision tree methods cannot consider all the characteristics of each well because they use the mean of each characteristic as the input. In addition, the network structure of the BP neural network is relatively simple, and its performance on complex data is limited. SVM is better suited to linearly separable problems. ANN is robust to noisy data and can approximate complex nonlinear relationships, but its training time is long and it sometimes fails to converge, so although its accuracy is relatively high, there is still room for improvement. RBFNN has strong generalization ability, high approximation accuracy, and fast convergence, and its result is better than that of the BP neural network. The KNN algorithm is simple in concept, mature in theory, and applicable to nonlinear classification; because the numbers of samples in the categories are balanced, KNN achieves relatively high accuracy. Finally, the decision tree algorithm lacks stability, because small changes in the training data may lead to different tree models. The proposed convolutional neural network based on AlexNet achieves higher accuracy while ensuring data integrity; in contrast, the other models lose important information because their inputs are averaged, resulting in lower accuracy. By increasing or reducing the number of layers in the network structure and varying the parameter values of each layer, it was found that the network introduced in Section 5.2 achieves the best results.
7. Conclusion
During implementation, the algorithm sometimes converged slowly or diverged; by optimizing the network input parameters and adjusting the step size, the convergence of the network can be accelerated. In addition, the data used in the experiments are limited to 156 wells. For a given well, if the key parameters of its logging and fracturing data are known, its oil and gas production can be predicted.
This study showed that a multi-input convolutional neural network model based on AlexNet can predict oil and gas production with high accuracy using logging and fracturing data as the input. In contrast to the other methods, which predict oil and gas production from averaged logging and fracturing data, the proposed model considers the logging data at each well depth and the fracturing data over each time interval. This allows each piece of data to retain as much original information as possible, increasing the diversity of information in the data and making the results more useful. The proposed approach achieved better results than the other methods.
Data Availability
The data analyzed in this study are privately held for commercial reasons and hence cannot be made publicly available.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work is based on research work supported by the Education Department of Sichuan of China under Grant No. 2018RZ0093.
References
X. G. Xia and Y. F. Yang, "Attribute synthetic evaluation model for the CBM recoverability and its application," Mathematical Problems in Engineering, vol. 2015, Article ID 434583, 6 pages, 2015.
C. Zhou and M. Liu, "Application research on oil production forecasting based on BP neural network," Journal of Wuhan University of Technology, vol. 3, pp. 125–129, 2009.
L. Pi and J. Li, "The application of RBFNN in the modeling and forecast of an oil field accumulative output system," Computer and Communications, vol. 19, pp. 92–94, 2001.
X. Huang and X. Geng, "Application of improved artificial neural network in prediction of natural gas production," Journal of Civil Aviation University of China, vol. 15, no. 1, pp. 12–13, 2004.
M. Xing, X. Chen, and Y. Wang, "Oilfield outputs combined forecast based on artificial neural networks," Computer Simulation, vol. 21, no. 5, pp. 116–120, 2004.
M. Xing, X. Chen, and Y. Wang, "Studying methods for forecasting output in oilfield based on neural network," Chinese Journal of Scientific Instrument, vol. 26, no. 8, pp. 60–62, 2005.
L. Tian, S. He, D. Gu et al., "Application of neural network technique for productivity evaluation in Changqing gasfield," Journal of Oil and Gas Technology, vol. 30, no. 5, pp. 106–109, 2008.
C. G. Monyei, A. O. Adewumi, and M. O. Obolo, "Oil well characterization and artificial gas lift optimization using neural networks combined with genetic algorithm," Discrete Dynamics in Nature and Society, vol. 2, Article ID 289239, pp. 161–177, 2014.
Y.-X. Duan, G.-T. Li, and Q.-F. Sun, "Research on convolutional neural network for reservoir parameter prediction," Journal on Communication, vol. 37, pp. 1–9, 2016.
H. Xiao and H. Cai, "Comparison study of normalization of feature vector," Computer Engineering and Applications, vol. 45, no. 22, pp. 117–119, 2009.
L. Xinwu, "A new text clustering algorithm based on improved k_means," Journal of Software, vol. 7, no. 1, pp. 95–101, 2012.
X. Li and H. Liu, "Greedy optimization for K-means-based consensus clustering," Tsinghua Science and Technology, vol. 23, no. 2, pp. 184–194, 2018.
S. Yang, Y. Li, and X. Hu, "Optimization study on k value of K-means algorithm," System Engineering Theory and Practice, vol. 26, 2006.
Z.-H. Zhao, S.-P. Yang, and Z.-Q. Ma, "License plate character recognition based on convolutional neural network LeNet-5," Journal of System Simulation, vol. 22, no. 3, pp. 638–641, 2010.
N. Kalchbrenner, E. Grefenstette, and P. Blunsom, "A convolutional neural network for modelling sentences," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 655–665, Baltimore, Md, USA, June 2014.
Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos, "A unified multi-scale deep convolutional neural network for fast object detection," in Proceedings of the European Conference on Computer Vision, pp. 354–370, 2016.
X. Lu, X. Duan, X. Mao, Y. Li, and X. Zhang, "Feature extraction and fusion using deep convolutional neural networks for face detection," Mathematical Problems in Engineering, vol. 2017, Article ID 1376726, 9 pages, 2017.
J. Deng, W. Dong, and R. Socher, "ImageNet: a large-scale hierarchical image database," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, Miami, Fla, USA, June 2009.
J. Jiao, F. Zhang, and L. Zhang, "Remote sensing estimation of rape planting area based on improved AlexNet model," Computer Measurement & Control, vol. 2, 2018.
G. E. Hinton, N. Srivastava, and A. Krizhevsky, "Improving neural networks by preventing co-adaptation of feature detectors," Computer Science, vol. 3, no. 4, pp. 212–223, 2012.
S. R. Folkes, O. Lahav, and S. J. Maddox, "An artificial neural network approach to classification of galaxy spectra," Monthly Notices of the Royal Astronomical Society, vol. 283, no. 2, pp. 651–665, 1996.
X. Yin and X. Liu, "Multi-task convolutional neural network for pose-invariant face recognition," IEEE Transactions on Image Processing, vol. 27, no. 2, pp. 964–975, 2018.
B. Ahn, G. D. Choi, and J. Park, "Real-time head pose estimation using multi-task deep neural network," Robotics & Autonomous Systems, vol. 103, 2018.
H. Zhang and L. Zou, "The application of BP neural network in well lithology identification," Geology and Prospecting, vol. 38, no. 6, pp. 63–65, 2002.
B. H. M. Sadeghi, "A BP-neural network predictor model for plastic injection molding process," Journal of Materials Processing Technology, vol. 103, no. 3, pp. 411–416, 2000.
Z. A, "A BP-neural network predictor model for plastic injection molding process," Statistics and Decision, vol. 13, pp. 35–37, 2008.
N. Ru and Y. Jianhua, "An attribute reduction method based on rough set and SVM with application in oil-gas prediction," in Proceedings of the 6th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2007) and 1st IEEE/ACIS International Workshop on e-Activity (IWEA 2007), pp. 502–506, Australia, July 2007.
X. Zhu, X. Yang, and Q. Zhang, "Application of LS-SVM-GA algorithm in oil production forecasting," Coal Technology, vol. 29, no. 11, pp. 197–198, 2010.
Z. Li, B. Liu, and T. Liu, "Application of support vector machine in oilfield production prediction," Journal of Northeast Petroleum University, vol. 29, no. 5, pp. 96–97, 2005.
S. Song, B. Hong, and B. Shi, "Research into calculation of natural gas well production based on an artificial neural network," Petroleum Science, vol. 2, no. 3, pp. 413–421, 2017.
G. H. Roshani, S. A. H. Feghhi, A. Mahmoudi-Aznaveh, E. Nazemi, and A. Adineh-Vand, "Precise volume fraction prediction in oil-water-gas multiphase flows by means of gamma-ray attenuation and artificial neural networks using one detector," Measurement, vol. 51, no. 1, pp. 34–41, 2014.
G. H. Roshani, R. Hanus, A. Khazaei, M. Zych, E. Nazemi, and V. Mosorov, "Density and velocity determination for single-phase flow based on radiotracer technique and neural networks," Flow Measurement and Instrumentation, vol. 61, pp. 9–14, 2018.
A. K. Yadav, V. Sharma, H. Malik, and S. S. Chandel, "Daily array yield prediction of grid-interactive photovoltaic plant using relief attribute evaluator based radial basis function neural network," Renewable & Sustainable Energy Reviews, vol. 81, pp. 2115–2127, 2018.
J. Y. Xue, Production Forecast for Daliudi S1 Gas Reservoir, Chengdu University of Technology, 2011.
M.-Z. Wang, B. Wang, F.-J. Sun et al., "Quantitative evaluation of CBM enrichment and high yield of Qinshui Basin," Natural Gas Geoscience, vol. 28, no. 7, pp. 1108–1114, 2017.
Copyright
Copyright © 2018 Yang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.