#### Abstract

To predict the absolute gas emission quantity more effectively, this paper proposes a new model based on the Elman neural network with hidden recurrent feedback. In the classic Elman network the recursive part is fixed and cannot be adjusted, which limits the network's approximation ability to a certain extent. This paper therefore adds correction factors to the recursive part and uses error feedback to determine their parameters. The stability of the resulting recursive modified Elman neural network is proved in the sense of Lyapunov stability theory, and the optimal learning rate is derived. Experiments and analysis on historical data from actual mine monitoring show that the recursive modified Elman model can effectively predict gas emission and improves both prediction accuracy and efficiency compared with the classic Elman prediction model.

#### 1. Introduction

Gas is one of the most important factors threatening safe production in mines [2]. In the daily management of mine safety, an effective way to prevent and control mine gas disasters is the scientific analysis of the gas emission data provided by the monitoring system [1]. Most recent work focuses on different methods for improving the prediction performance of the absolute gas emission quantity, such as Grey theory [3], principal component regression analysis [4], partial least squares support vector machines [5], virtual state variables with Kalman filtering [6], BP neural networks [7], and RBF neural networks [8].

In recent years, intelligent computing methods have developed rapidly in dynamic system identification [9, 10], time series prediction [11, 12], and other fields. Many factors influence the absolute gas emission quantity, such as coal seam gas content, burying depth, and coal seam thickness [13, 14]. This means the gas emission prediction model is a multidimensional complex dynamic system, and it is difficult to predict gas emission quantity accurately. Since a recurrent neural network is a highly nonlinear dynamical system that exhibits complex behaviors and a good ability to process dynamic information [15], and recurrent neural networks have wide applications in various areas [16, 17], a recurrent neural network is expected to perform better than a feedforward network (such as BP or RBF) in modeling and predicting gas emission quantity. In particular, the Elman neural network (ENN) has proved successful in gas emission prediction [18, 19]; works on improving its prediction performance can be found in, for example, [20–22]. However, a common drawback of the above gas emission prediction models based on the classic Elman neural network is that the recursive part of the hidden layer is fixed and cannot be adjusted. This drawback limits the nonlinear approximation ability of the classic Elman neural network.

Based on the above observations, this paper proposes a novel strategy of adding correction factors to the recursive part of the ENN, resulting in a new model called the recursive modified Elman neural network (RMENN). The stability and convergence of the RMENN model are theoretically proved, and some meaningful results are obtained. In practice, through analysis of the main factors affecting coal gas emission, this paper puts forward a gas emission prediction model based on the RMENN.

The rest of this paper is organized as follows. The establishment of RMENN model is described in Section 2. The learning algorithms of RMENN model are described in Section 3. The performance analysis and flowchart of RMENN model are described in Section 4. Experiment analysis results on the gas emission prediction are presented in Section 5. Finally, the paper is concluded in Section 6.

#### 2. Establishment of RMENN Model

As discussed in Section 1, we aim to propose a specific architecture that overcomes the aforementioned drawback of a fixed recursive structure and improves the nonlinear approximation ability of the ENN. As shown in Figure 1, correction factors are added in the context layer and the output layer to adjust the values of the recursive parts. Let $u(k)$ and $y(k)$ denote the network input and output vectors at discrete time $k$, respectively. Let $W^1$, $W^2$, $W^3$, and $W^4$ denote the weight matrices of the context-hidden, input-hidden, hidden-output, and output-hidden connections, respectively. Let $x_c(k)$ and $x(k)$ denote the output vectors of the context layer and the hidden layer at time $k$, respectively. Let $\alpha$ be the correction factor of the hidden context layer and $\gamma$ the correction factor of the output context layer. $f(\cdot)$ and $g(\cdot)$ are the activation functions of the hidden and output layers, respectively; in general, $f(\cdot)$ is the sigmoid function and $g(\cdot)$ is a linear function. Let $y_c(k)$ be the output vector of the output context layer at time $k$.

With this feature, the new model, called RMENN, improves the update power of the classic ENN and exhibits rapid convergence and high prediction accuracy. The input-output relationship of the RMENN can be expressed as

$$x(k) = f\big(W^1 x_c(k) + W^2 u(k-1) + W^4 y_c(k)\big),$$
$$x_c(k) = \lambda_1\, x(k-1) + \alpha\, x_c(k-1),$$
$$y_c(k) = \lambda_2\, y(k-1) + \gamma\, y_c(k-1),$$
$$y(k) = g\big(W^3 x(k)\big),$$

where $\lambda_1$ and $\alpha$ are, respectively, the feedback factor and correction factor of the context layer, and $\lambda_2$ and $\gamma$ are, respectively, the feedback factor and correction factor of the output layer. In particular, when the correction factors $\alpha$ and $\gamma$ are set to zero and the output feedback is removed, the model reduces to the classic Elman neural network [11].
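As a concrete illustration, the recursive structure just described can be sketched in NumPy. This is a minimal sketch under standard Elman notation; the class name, weight initialization, and factor values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RMENN:
    """Minimal RMENN sketch: an Elman network with correction factors
    (alpha, gamma) applied to the hidden and output context layers."""

    def __init__(self, n_in, n_hidden, n_out,
                 lam1=1.0, alpha=0.1, lam2=1.0, gamma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W3 = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.W4 = rng.normal(0.0, 0.1, (n_hidden, n_out))     # output context -> hidden
        self.lam1, self.alpha = lam1, alpha  # context-layer feedback / correction
        self.lam2, self.gamma = lam2, gamma  # output-layer feedback / correction
        self.x_prev = np.zeros(n_hidden)     # x(k-1)
        self.xc = np.zeros(n_hidden)         # x_c(k-1)
        self.y_prev = np.zeros(n_out)        # y(k-1)
        self.yc = np.zeros(n_out)            # y_c(k-1)

    def step(self, u):
        # Context states: previous activations plus a corrected copy of themselves.
        self.xc = self.lam1 * self.x_prev + self.alpha * self.xc
        self.yc = self.lam2 * self.y_prev + self.gamma * self.yc
        x = sigmoid(self.W1 @ self.xc + self.W2 @ u + self.W4 @ self.yc)
        y = self.W3 @ x  # linear output activation g
        self.x_prev, self.y_prev = x, y
        return y
```

In this sketch, setting `alpha` and `gamma` to zero and zeroing `W4` recovers a classic Elman forward pass, which is the comparison carried out in Section 5.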

The topology of the RMENN is shown in Figure 1.

#### 3. Learning Algorithm for RMENN

The main objective of the learning algorithm is to minimize a predefined energy function by adaptively adjusting the network parameters on the basis of a given set of input-output pairs. The energy function used in the RMENN is

$$E(k) = \tfrac{1}{2}\,\big(y_d(k) - y(k)\big)^{\mathrm T}\big(y_d(k) - y(k)\big),$$

where $y_d(k)$ is the desired output associated with the input pattern and $y(k)$ is the inferred output at discrete time $k$.

The weights of the RMENN are updated along the negative gradient of the energy function:

$$\Delta W^i = -\eta_i\, \frac{\partial E(k)}{\partial W^i}, \qquad i = 1, 2, 3, 4,$$

where $\eta_i$ is the learning rate of $W^i$. Because the context states $x_c(k)$ and $y_c(k)$ depend recursively on past activations, the gradient terms are computed recursively by the chain rule; for example,

$$\frac{\partial x_c(k)}{\partial W^1} = \lambda_1\, \frac{\partial x(k-1)}{\partial W^1} + \alpha\, \frac{\partial x_c(k-1)}{\partial W^1}.$$
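As a concrete example of the negative-gradient update, consider the hidden-output weights, which have the simplest closed-form gradient when the output activation is linear. This is an illustrative sketch; the function name and shapes are assumptions:

```python
import numpy as np

def train_step_W3(W3, x, y_target, eta):
    """One negative-gradient update of the hidden-output weights.

    With E(k) = 0.5 * ||y_d(k) - y(k)||^2 and linear output y(k) = W3 x(k),
    dE/dW3 = -(y_d - y) x^T, so the update is W3 <- W3 + eta * outer(e, x).
    """
    e = y_target - W3 @ x                  # prediction error e(k)
    W3_new = W3 + eta * np.outer(e, x)     # negative-gradient step
    E_before = 0.5 * float(e @ e)
    e_new = y_target - W3_new @ x
    E_after = 0.5 * float(e_new @ e_new)
    return W3_new, E_before, E_after
```

Because the updated error is $(1 - \eta\|x\|^2)\,e$, the energy decreases whenever $0 < \eta\|x\|^2 < 2$, which previews the convergence condition proved in Section 4.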

#### 4. Performance Analysis

##### 4.1. Convergence and Stability

An appropriate learning rate makes the learning algorithm converge faster. Using Lyapunov stability theory, the convergence condition for one learning rate can be proved in detail; the proofs for the other learning rates are similar.

Theorem 1. *Let the weights of the RMENN be updated by (3)–(9). If the learning rate $\eta_i$ of each weight matrix $W^i$ ($i = 1, 2, 3, 4$) satisfies $0 < \eta_i < 2 / \max_k \left\|\partial y(k)/\partial W^i\right\|^{2}$, then the corresponding iterative learning process is stable and convergent.*

*Proof.* (1) Let the energy function be defined by (2).

Define the Lyapunov function $V(k) = E(k)$ and the prediction error $e(k) = y_d(k) - y(k)$. Since

$$\Delta V(k) = E(k+1) - E(k) = \Delta e(k)^{\mathrm T}\Big(e(k) + \tfrac{1}{2}\,\Delta e(k)\Big),$$

where, to first order, $\Delta e(k) \approx -J\,\Delta w$ with the Jacobian $J = \partial y(k)/\partial w$, substituting the update $\Delta w = \eta\, J^{\mathrm T} e(k)$ gives

$$\Delta V(k) \le -\eta \left\|J^{\mathrm T} e(k)\right\|^{2}\Big(1 - \tfrac{1}{2}\,\eta \left\|J\right\|^{2}\Big),$$

where $\|\cdot\|$ denotes the 2-norm.

Since the context state satisfies the recursion $x_c(k) = \lambda_1\, x(k-1) + \alpha\, x_c(k-1)$, according to (8) with the initial condition $x_c(0) = 0$ we can get

$$x_c(k) = \lambda_1 \sum_{j=1}^{k} \alpha^{\,k-j}\, x(j-1).$$

Since $|\alpha| < 1$ and the sigmoid hidden outputs are bounded, each component of $x_c(k)$ is bounded by $\lambda_1/(1 - |\alpha|)$, so the gradient terms propagated through the context layer remain bounded. Therefore, for a learning rate satisfying the condition of Theorem 1, $\Delta V(k) \le 0$, and we can ensure that the learning process is stable and convergent.

(4) The proof is similar to that of part (1).

Since the output context state satisfies the recursion $y_c(k) = \lambda_2\, y(k-1) + \gamma\, y_c(k-1)$, according to (9) with the initial condition $y_c(0) = 0$ we can get

$$y_c(k) = \lambda_2 \sum_{j=1}^{k} \gamma^{\,k-j}\, y(j-1).$$

Let $S_k = \sum_{j=1}^{k} |\gamma|^{\,k-j}$. Then $S_k \le 1/(1 - |\gamma|)$ for $|\gamma| < 1$, so the terms propagated through the output context layer remain bounded.

Hence, for a learning rate satisfying the condition of Theorem 1, $\Delta V(k) \le 0$, and we can ensure that the learning process is stable and convergent.

The proofs of the remaining parts are similar.

This completes the proof of the theorem.

##### 4.2. Adaptive Learning Rate of RMENN

As explained above, we can derive the optimal learning rate as follows. The convergence speed of the RMENN is fastest when $\Delta V(k)$ attains its minimum (most negative) value. Setting $\partial\, \Delta V(k) / \partial \eta_i = 0$ yields the approximately optimal learning rate

$$\eta_i^{*} = \frac{1}{\max_k \left\|\partial y(k)/\partial W^i\right\|^{2}}, \qquad i = 1, 2, 3, 4,$$

where $\eta_1^{*}$, $\eta_2^{*}$, $\eta_3^{*}$, and $\eta_4^{*}$ are the optimal adaptive learning rates of $W^1$, $W^2$, $W^3$, and $W^4$, respectively.
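The effect of such an optimal rate can be checked numerically on a single linear layer, where the post-update error is $(1 - \eta\|x\|^2)$ times the old error, so $\eta^{*} = 1/\|x\|^2$ cancels the error in one step. This is a one-layer illustration of the idea, not the full RMENN computation:

```python
import numpy as np

# For a linear output layer y = W3 @ x, one gradient update with rate eta leaves
# the error e' = (1 - eta * ||x||^2) * e, so eta* = 1 / ||x||^2 drives the error
# to zero in a single step.
W3 = np.array([[0.2, -0.1, 0.3, 0.0]])
x = np.array([1.0, 2.0, 0.5, -1.0])
y_target = np.array([0.7])

eta_opt = 1.0 / (x @ x)                  # optimal learning rate for this layer
e = y_target - W3 @ x
W3_new = W3 + eta_opt * np.outer(e, x)   # negative-gradient update
residual = y_target - W3_new @ x         # essentially zero after one step
```

In the full RMENN the Jacobian changes at every step, which is why the paper uses an adaptive (per-step) rate rather than a fixed one.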

The training algorithm procedures of RMENN are shown in Figure 2.

#### 5. Model Test

##### 5.1. Data Selection and Preliminary Analysis

China is the largest coal consumer among the developing countries. The production safety situation of China's coal mines is very grim, especially regarding gas disaster accidents, which cause large numbers of casualties and property losses and have attracted great attention from the government. Precise prediction of gas emission is therefore important for mine production safety in China. Gas emission prediction based on small samples has long been a significant subject in the coal mine gas research field [1].

This paper uses the absolute gas emission quantity data of a working face of the Qianjiaying Mine of the Kailuan Mining Group from May 2007 to December 2008 [23]. The main factors, shown in Table 1, are coal seam gas content ($x_1$), burying depth ($x_2$), coal seam thickness ($x_3$), coal seam dip angle ($x_4$), mining height ($x_5$), daily work progress ($x_6$), working face length ($x_7$), production rate ($x_8$), adjacent layer gas content ($x_9$), adjacent layer thickness ($x_{10}$), adjacent layer spacing ($x_{11}$), mining intensity ($x_{12}$), interlayer lithology ($x_{13}$), and gas emission quantity ($y$).

In order to reduce the influence of the different dimensions of the factors, the experimental data are normalized between the lower and upper limits of each factor using the min-max formula $x' = (x - x_{\min})/(x_{\max} - x_{\min})$, where $x$ is an experimental datum from Table 1, $x_{\min}$ and $x_{\max}$ are the lower and upper limits of that factor, and $x'$ is the standardized value. The inverse formula $x = x'(x_{\max} - x_{\min}) + x_{\min}$ is used to restore the data. The first 16 samples are used for training and the remaining 4 for validation. Through experiments, the optimal topology of the classic ENN was found to be 13-16-1, so the RMENN adopts the same topology for a fair comparison with the ENN. The training error target is set to 0.01.
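A minimal sketch of the min-max normalization and its inverse, assuming the standard formulas $x' = (x - x_{\min})/(x_{\max} - x_{\min})$ and $x = x'(x_{\max} - x_{\min}) + x_{\min}$ applied column-wise to the factor table:

```python
import numpy as np

def normalize(X):
    """Column-wise min-max scaling: x' = (x - x_min) / (x_max - x_min)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min), x_min, x_max

def restore(Xn, x_min, x_max):
    """Inverse transform: x = x' * (x_max - x_min) + x_min."""
    return Xn * (x_max - x_min) + x_min
```

Storing `x_min` and `x_max` from the training data is what allows the network's normalized predictions to be restored to physical gas emission units.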

##### 5.2. Training Results

After 50 independent simulations, we compare the training performance of the two models. In terms of training error traces, Figures 3(a)–3(c) show that the RMENN has stronger update power than the classic ENN, whereas the classic ENN obviously lacks update power and does not even get sufficiently close to the target error, as shown in Figure 3(c). This is because the recursive parts of the classic ENN cannot be adjusted, whereas the recursive parts of the RMENN can be adjusted and its learning rate can be tuned dynamically to improve the update power.

**(a) The best training error traces**

**(b) The average training error traces in 50 independent simulations**

**(c) The worst training error traces**

In terms of learning speed, the RMENN converges faster than the classic ENN. The training error of the RMENN meets the requirement after 348 epochs on average, whereas the classic ENN does not meet the requirement even in the best training error trace, as shown in Figure 3(a) (the mean square error of the classic ENN is 0.010193).

Figure 4 shows the relative error distribution in the training process. The maximum, minimum, and average relative errors of the RMENN are 10.04%, 0.87%, and 3.54%, respectively, whereas those of the classic ENN are 15.09%, 2.14%, and 5.21%, respectively. This demonstrates that the RMENN achieves higher training accuracy than the classic ENN.

Figure 5 shows that the RMENN achieves a better average approximation effect than the classic ENN over the 50 independent simulations.

##### 5.3. Comparison of Model Prediction Ability

The mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are used as indicators of prediction precision. They are defined as

$$\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\big(y_t - \hat{y}_t\big)^{2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\big|y_t - \hat{y}_t\big|, \qquad \mathrm{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right| \times 100\%,$$

where $y_t$ and $\hat{y}_t$ denote the real and predicted values at time $t$, respectively.
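The three indicators can be computed directly; a minimal sketch, assuming the mean form of the absolute error (the standard reading of the MAE abbreviation):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (y_true must be nonzero)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true))) * 100.0
```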

Table 2 shows the relative error distribution in the prediction process. The maximum, minimum, and average relative errors of the RMENN are 5.48%, 0.32%, and 3.43%, respectively, whereas those of the classic ENN are 9.12%, 2.48%, and 5.50%, respectively; the relative errors of the RMENN are noticeably smaller. The MSE, MAE, and MAPE of the RMENN are 0.0620, 0.2181, and 3.43%, respectively, against 0.1280, 0.3385, and 5.50% for the classic ENN. This demonstrates that the proposed RMENN model performs better as evaluated by MSE, MAE, and MAPE.

To comprehensively evaluate the performance of the two prediction models and the significance of their differences, the Diebold-Mariano (DM) test is adopted with three loss functions: MSE, MAE, and MAPE. The DM test focuses on predictive accuracy and can be used to compare the prediction performance of the proposed model with that of other models. The DM statistic is

$$\mathrm{DM} = \frac{\bar{d}}{\sqrt{\hat{V}_d / n}}, \qquad d_t = L\big(e_{1,t}\big) - L\big(e_{2,t}\big),$$

where $L$ is the loss function, $e_{1,t}$ and $e_{2,t}$ are the prediction errors of the two models, $\bar{d}$ is the sample mean of $d_t$, and $\hat{V}_d$ is an estimator of the variance of $d_t$. The null hypothesis is that the two models have the same accuracy, i.e., $H_0 \colon \mathrm{E}[d_t] = 0$. Under the null hypothesis the DM statistic is asymptotically standard normal. If $|\mathrm{DM}|$ exceeds the critical value $z_{\alpha/2}$, the null hypothesis is rejected and the two models are significantly different.
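A minimal sketch of the DM statistic in its simplest form, using the sample variance of the loss differentials without the autocovariance correction needed for multi-step forecasts; the function and variable names are illustrative:

```python
import math

def dm_test(e1, e2, loss=lambda e: e * e):
    """Diebold-Mariano statistic with squared-error loss by default.

    d_t = L(e1_t) - L(e2_t); DM = mean(d) / sqrt(var(d) / n).
    Under H0 (equal predictive accuracy), DM is asymptotically N(0, 1),
    so |DM| > 1.96 rejects H0 at the 5% significance level.
    """
    d = [loss(a) - loss(b) for a, b in zip(e1, e2)]
    n = len(d)
    d_bar = sum(d) / n
    var_d = sum((x - d_bar) ** 2 for x in d) / n
    return d_bar / math.sqrt(var_d / n)
```

Passing `loss=abs` gives the MAE-based variant; a percentage loss gives the MAPE-based variant used in Tables 3 and 5.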

Table 3 shows that the DM value evaluated by MSE exceeds the critical value at the 5% significance level, the DM value evaluated by MAE exceeds the critical value at the 0.5% significance level, and the DM value evaluated by MAPE exceeds the critical value at the 1% significance level.

In order to further verify the validity of the RMENN model, four sets of sample data are randomly selected for validation and the remaining data are used for training. Table 4 shows the error analysis of the prediction results. The relative error of the ENN is larger than that of the RMENN for every sample except the tenth, and the ENN errors measured by MSE, MAE, and MAPE are all larger than those of the RMENN. Table 5 shows that the DM value evaluated by MSE does not exceed the critical value at the 10% significance level, whereas the DM values evaluated by MAE and MAPE do exceed their critical values. Overall, the tests indicate that the RMENN model is significantly better than the ENN model.

#### 6. Conclusion

In this paper, we analyze the drawback of the classic ENN and propose a novel network architecture, called RMENN, for gas emission prediction. In theory, the convergence and stability of the RMENN learning algorithm are proved, and the approximately optimal learning rate is given. In practice, experimental results on gas emission prediction demonstrate that the RMENN achieves a better convergence rate and prediction accuracy than the classic ENN, at the cost of a slightly heavier structure (the correction factors). The RMENN therefore has practical application value and promising prospects.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This research was supported by Foundation of Liaoning Educational Committee (Grant no. LJ2017QL021) and the National Natural Science Foundation of China (Grant no. 61304173).