Abstract

With the worsening pollution of the ecological environment, the chemical gases emitted by industry contain large amounts of harmful gases. Intelligent algorithms that model the emission of chemical gases can help reduce emissions and predict them more accurately. This paper proposes a grey wolf optimization algorithm based on a chaotic search strategy, combined with an extreme learning machine, to predict chemical emission gases. Taking a 330 MW pulverized coal-fired boiler as the test object, a CNGWO-ELM chemical emission prediction model is established; the model is trained and tested using the relevant data collected by the DCS as training samples and test samples. Simulation experiments show that the CNGWO-ELM chemical emission prediction model has better accuracy and stronger generalization ability, giving it high practical value.

1. Introduction

In recent years, with the release of chemical gases, environmental pollution has become increasingly serious [1, 2]. To control environmental pollution effectively, the degree of pollution must be monitored in a timely manner and the composition of pollutants analyzed. More and more experts and scholars have found that analytical chemistry is an important way to monitor the environment effectively and is of great significance to environmental protection.

With the continuous advancement of industrialization, the dependence of economic and social development on energy will further increase. Strengthening strategic research on alternatives to fossil energy such as coal, oil, and natural gas is a necessary measure to solve energy supply shortages and to promote economic development and environmental friendliness. Circulating fluidized bed (CFB) combustion is one of the main coal combustion methods in China. It has the advantages of a wide fuel application range, good load regulation performance, low pollutant discharge, and easy utilization of ash [24]. The combination of the CFB combustion mode and ultra-supercritical parameter technology will be the inevitable development direction of CFB boilers in the future. The original NOx emission concentration of conventional CFB boilers is between 100 and 300 mg/Nm³ [5, 6], which cannot meet the national standard limiting NOx emission concentration to below 100 mg/Nm³; in some areas the limit is below 50 mg/Nm³. Under these ultralow emission standards, CFB boilers face the problem of having to further reduce NOx emissions.

Many scholars at home and abroad have devoted themselves to the study of optimizing combustion conditions to control NOx formation. Rajan and Wen [7] first established a comprehensive simulation model of the fluidized bed coal combustion chamber (FBC). The model can predict combustion efficiency, the particle size distribution of coke and limestone, the slag discharge rate of the bed material, bed temperature changes, and the distribution of SO2, NOx, O2, CO, CO2, and volatile matter along the furnace height. The factors influencing NOx emission from the CFB boiler are combustion temperature and uniformity, excess air coefficient, staged combustion, and so on [8, 9]. In addition, deep reduction of NOx in the CFB boiler with the mainstream selective noncatalytic reduction (SNCR) technology is also affected by reductant type, reaction temperature, ammonia-nitrogen ratio, and other factors [10–12]. Edelman et al. added a dynamic mathematical model of the steam-water system to the overall mathematical model of a circulating fluidized bed boiler on the basis of the Wei model [13] and the Muir model [14]. Dynamic models of the combustion chamber temperature, the heat transfer rate of the heat exchanger, and the oxygen content in the flue gas of the circulating fluidized bed (CFB) were established, and dynamic predictions were made.

The structure of this paper is as follows. Firstly, the basic GWO algorithm is explained and a chaotic nonlinear grey wolf optimization algorithm is proposed. Secondly, an ELM optimization model is proposed. Finally, the CNGWO-ELM algorithm proposed in this paper is tested to evaluate its prediction performance, and the relevant evaluation indicators are given.

2. Standard Gray Wolf Optimization Algorithm

The grey wolf optimization algorithm (GWO) is a swarm intelligence algorithm proposed by Mirjalili et al. in 2014, inspired by grey wolf predation behavior [15]. The grey wolf pack has a 4-layer hierarchical mechanism of α, β, δ, and ω. Among them, the α wolf is the leader with the best fitness in the pack; β and δ are the two individuals with the second-best fitness, and their task is to assist the α wolf in managing and hunting with the pack; ω comprises the remaining common wolves. The predation process is described as follows: first, the α wolf leads the pack to search, track, and approach the prey; then, the β and δ wolves attack the prey under the command of the α wolf and summon the ω wolves to attack the prey until it is captured. The GWO algorithm completes the predation behavior by simulating behaviors such as encircling, hunting, and attacking, thus achieving a global optimization process.

Assume that, in the d-dimensional search space, the grey wolf pack consists of N grey wolves. The GWO algorithm is described as follows.

Surrounding stage: after the wolves determine the position of the prey, they first surround it. The mathematical description is as follows:

D = |C · X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A · D,

where D is the distance between the grey wolf and the prey, X_p(t) is the position of the prey after the t-th iteration (the current optimal solution), X(t) is the position of the grey wolf after the t-th iteration (a feasible solution), and A and C are coefficient factors, defined as follows:

A = 2a · r1 − a,
C = 2 · r2,

where r1 and r2 are random numbers in [0, 1] and a is a convergence factor, which decreases linearly from 2 to 0 as the number of iterations increases.

Hunting phase: after the encirclement phase is completed, the α wolf leads the β and δ wolves to hunt down the prey. During the hunt, the individual positions of the wolves move as the prey escapes:

D_α = |C1 · X_α − X|, D_β = |C2 · X_β − X|, D_δ = |C3 · X_δ − X|,
X1 = X_α − A1 · D_α, X2 = X_β − A2 · D_β, X3 = X_δ − A3 · D_δ,

where X_α, X_β, and X_δ represent the current positions of the α, β, and δ wolves, X represents the current grey wolf position, and A1, A2, A3 and C1, C2, C3 are coefficient vectors defined as above.

The position of the ω wolves is then updated as follows:

X(t + 1) = (X1 + X2 + X3) / 3.
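As a concrete illustration, the update equations above (encircling distance, three candidate positions, and averaging) can be sketched in NumPy; the function name and array layout are illustrative, not taken from the paper:

```python
import numpy as np

def gwo_update(positions, alpha, beta, delta, a):
    """One standard GWO position update.

    positions: (N, d) array of wolf positions; alpha, beta, delta: (d,) leader
    positions; a: convergence factor, decreased from 2 to 0 by the caller.
    """
    N, d = positions.shape
    new_positions = np.empty_like(positions)
    for i in range(N):
        X = positions[i]
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(d), np.random.rand(d)
            A = 2 * a * r1 - a           # A = 2a*r1 - a
            C = 2 * r2                   # C = 2*r2
            D = np.abs(C * leader - X)   # distance to this leader
            candidates.append(leader - A * D)
        # X(t+1) = (X1 + X2 + X3) / 3
        new_positions[i] = sum(candidates) / 3.0
    return new_positions
```

Note that when a reaches 0, A vanishes and every wolf moves to the centroid of the three leaders, which is the pure exploitation limit of the algorithm.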

2.1. Chaotic Nonlinear Grey Wolf Optimization Algorithm

The literature [16] pointed out that the optimization process of the GWO algorithm is essentially dominated by three optimal solutions, the α, β, and δ wolves, which easily causes the algorithm to converge prematurely and fall into a local optimum. Chaos is a nonlinear phenomenon with phase-space ergodicity and inherent randomness; combining chaotic variables in the search can effectively jump out of local optima and achieve global optimization. The literature [17] pointed out that the Kent chaotic map performs better than the logistic chaotic map. Therefore, introducing a Kent chaos optimization strategy into the basic GWO algorithm to perturb solutions that have fallen into a local optimum will effectively help the algorithm find a better solution. In addition, introducing a nonlinear dynamic weighting strategy into the GWO algorithm will effectively balance the exploitation and exploration capabilities of the algorithm and further improve its global optimization performance.

2.1.1. Kent Chaotic Search Strategy

The Kent chaotic map model is described as follows:

x_{n+1} = x_n / m, if 0 < x_n ≤ m,
x_{n+1} = (1 − x_n) / (1 − m), if m < x_n < 1,

where m ∈ (0, 1) is the control parameter; when the Lyapunov exponent of the Kent map is greater than 0, the map is in a chaotic state. In this paper, the probability density function of the generated sequence obeys a uniform distribution on (0, 1). The Lyapunov exponent characterizes the divergence rate of small uncertainties in the initial state; that of the Kent map is 0.696, which is greater than the 0.693 of the classical logistic map.

In the chaotic search process, the ergodicity of chaotic motion is used to generate a chaotic sequence starting from the solution at which the search has stagnated. The best solution in the sequence is then taken as the new candidate, which allows the search to jump out of the local optimum. In the GWO algorithm, if a solution does not improve significantly after limit consecutive iterations, it is considered to have fallen into a local optimum, and Kent chaos optimization is applied to the α, β, and δ wolves of the GWO algorithm. Let the solution space of the optimization problem be [low, up]. The Kent chaos optimization steps are as follows:

Step 1: map the current solution X into the domain (0, 1) of the Kent map, y = (X − low) / (up − low).
Step 2: generate chaotic sequences. Iteratively generate the chaotic variable sequence y_1, y_2, …, y_K by the Kent equation.
Step 3: using the carrier operation, each y_k is first amplified and then loaded onto the grey wolf individual to be searched, so that a new grey wolf individual position X_k = low + y_k · (up − low) in the neighborhood of the original solution space is obtained, where k = 1, 2, …, K.
Step 4: calculate the fitness value of each X_k, compare it with the fitness value of X, and retain the best solution.
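The four steps above can be sketched as a chaotic local search routine. The mapping and carrier formulas follow the standard chaotic-search pattern, and the parameter values (e.g., m = 0.7, 50 steps) are assumptions rather than values from the paper:

```python
import numpy as np

def kent_map(y, m=0.7):
    """One Kent-map iteration; m is the control parameter (0 < m < 1, m != 0.5)."""
    return y / m if y <= m else (1.0 - y) / (1.0 - m)

def kent_chaotic_search(X, fitness, low, up, steps=50, m=0.7):
    """Chaotic local search around a stagnated solution X (Steps 1-4).

    fitness is minimized; low/up bound the solution space.
    """
    X = np.asarray(X, dtype=float)
    # Step 1: map X into (0, 1)
    y = (X - low) / (up - low)
    y = np.clip(y, 1e-6, 1 - 1e-6)  # keep strictly inside (0, 1)
    best_X, best_f = X.copy(), fitness(X)
    for _ in range(steps):
        # Step 2: advance the chaotic sequence componentwise
        y = np.array([kent_map(v, m) for v in y])
        # Step 3: carry the chaotic variables back into the solution space
        cand = low + y * (up - low)
        # Step 4: keep the better solution
        f = fitness(cand)
        if f < best_f:
            best_X, best_f = cand, f
    return best_X, best_f
```

Because the search always keeps the incumbent, the returned fitness is never worse than that of the stagnated solution it started from.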

2.1.2. Nonlinear Dynamic Weights

For the GWO algorithm, global exploration means probing a wider range of the search area, while local development emphasizes using existing information to search certain regions of the group in detail. There is no doubt that how the GWO algorithm balances global exploration and local development is the key to ensuring its global search performance. In the GWO algorithm, A adjusts this balance, and the value of A changes with the control parameter a. Therefore, the control parameter a largely determines the balance between global exploration and local development.

In the standard GWO algorithm, the control parameter a decreases linearly from 2 to 0 as the number of iterations increases. However, this linearly decreasing strategy cannot fully reflect the actual, complex optimization process of the algorithm, and nonlinear control parameters have been shown to outperform the linearly decreasing strategy. Based on this, a nonlinear exponential decreasing strategy is proposed, in which a is scaled by a weight ω(t) that converges to 0.01 when t reaches the maximum number of iterations. In the initial stage, ω has a large value and decreases rapidly as the number of iterations increases; in the latter part of the iteration, the descending speed gradually slows down. Compared with the linearly decreasing adjustment scheme, the nonlinear exponentially decreasing weighting strategy can improve the optimization performance of the GWO algorithm.
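One schedule consistent with this description can be sketched as follows. The exact expression in the paper was not recoverable, so the form ω(t) = 0.01^(t / t_max) is an assumption; it satisfies ω(0) = 1, ω(t_max) = 0.01, and decreases quickly at first and slowly later:

```python
def nonlinear_a(t, t_max, a_init=2.0, w_final=0.01):
    """Nonlinear exponentially decreasing control parameter a(t) = a_init * w(t).

    Assumed form: w(t) = w_final ** (t / t_max), so that w starts at 1,
    converges to w_final = 0.01 at t = t_max, and the early decrease is
    much steeper than the late decrease.
    """
    w = w_final ** (t / t_max)
    return a_init * w
```

Under this schedule a(0) = 2 and a(t_max) = 0.02, and the drop over the first tenth of the run is far larger than over the last tenth, matching the behavior described above.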

2.2. CNGWO Algorithm Steps

The following are the basic steps of the CNGWO algorithm, as shown in Algorithm 1.

Step 1: algorithm parameter setting: grey wolf group size N, maximum iteration number t_max, iteration number t.
Step 2: randomly initialize the population and order.
While (t < t_max) do
 Calculate the fitness values of the grey wolf group, update the α, β, δ wolves according to the fitness values, and record the positions X_α, X_β, and X_δ.
 For i = 1 to N do
  Calculate the value of the control parameter a according to the nonlinear exponential decreasing strategy.
  Update the values of the coefficients A and C.
  Update the positions of the remaining ω wolves according to the hunting-phase equations.
  Update the positions of the α, β, δ wolves.
  Apply the Kent chaotic search strategy to the α, β, and δ wolves if the search has stagnated.
 End for
 End for
t = t + 1
End while
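A compact Python sketch of the loop in Algorithm 1 (for minimization) might look as follows. The Kent perturbation (a single carrier step on the α wolf every iteration) and the exponential schedule for a are simplified assumptions:

```python
import numpy as np

def cngwo(fitness, dim, low, up, n_wolves=20, t_max=100, seed=0):
    """Simplified CNGWO loop: GWO update + nonlinear a + Kent perturbation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, up, size=(n_wolves, dim))   # Step 2: random init
    m = 0.7                                          # Kent control parameter (assumed)
    for t in range(t_max):
        f = np.array([fitness(x) for x in X])
        order = np.argsort(f)
        alpha, beta, delta = (X[order[0]].copy(), X[order[1]].copy(),
                              X[order[2]].copy())
        a = 2.0 * 0.01 ** (t / t_max)                # nonlinear decreasing a (assumed form)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                cand.append(leader - A * D)
            X[i] = np.clip(sum(cand) / 3.0, low, up)
        # Kent chaotic perturbation of the alpha wolf (one carrier step)
        y = np.clip((alpha - low) / (up - low), 1e-6, 1 - 1e-6)
        y = np.where(y <= m, y / m, (1 - y) / (1 - m))
        chaotic = low + y * (up - low)
        if fitness(chaotic) < fitness(alpha):
            X[order[0]] = chaotic
    f = np.array([fitness(x) for x in X])
    return X[np.argmin(f)], float(f.min())
```

On a simple convex test function such as the sphere, this loop should drive the best fitness close to zero within the default budget.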

3. Extreme Learning Machine Optimization Model

3.1. Fundamental Principles of Extreme Learning Machine (ELM)

ELM is a single-hidden-layer feedforward neural network learning algorithm that has received extensive attention in recent years. The difference from traditional neural network training is that the ELM hidden layer does not need iterative training: the input weights and hidden layer node biases are selected randomly and, with the minimum training error as the goal, the hidden layer output weights are determined analytically. The algorithm is described as follows.

Let m, M, and n be the numbers of nodes in the network input layer, hidden layer, and output layer, respectively, let g(·) be the activation function of the hidden layer neurons, and let b_i be the threshold of the i-th hidden layer node. Let the N samples be (x_j, t_j), j = 1, 2, …, N, where x_j is the network input vector and t_j is the target output vector.

The ELM model is described as follows:

f(x_j) = Σ_{i=1}^{M} β_i g(w_i · x_j + b_i), j = 1, 2, …, N,

where w_i represents the input weight vector connecting the network input layer nodes to the i-th hidden layer node, β_i represents the output weight vector connecting the i-th hidden layer node to the network output layer nodes, and f(x_j) represents the network output value.

{w_i, b_i} contains the network input weights and the hidden layer node thresholds. ELM's training goal is to find the optimal output weights, which can be further described as follows:

Hβ = T,

where H represents the hidden layer output matrix of the network with respect to the samples, with entries H_{ji} = g(w_i · x_j + b_i), β = [β_1, …, β_M]^T represents the output weight matrix, and T = [t_1, …, t_N]^T represents the target value matrix of the sample set.

The ELM network training process can thus be reduced to a linear least-squares problem. When the activation function is infinitely differentiable, the network input weights and thresholds can be randomly assigned. At this point, the matrix H is a constant matrix, and the learning process of the extreme learning machine is equivalent to solving the linear system Hβ = T. The least-squares solution of minimum norm is

β̂ = H⁺T,

where H⁺ is the Moore–Penrose generalized inverse of the hidden layer output matrix H; after β̂ is solved, ELM's network training process is completed. The implementation steps of the ELM algorithm are as follows:

Step 1: given a training set (x_j, t_j), the activation function g(·), and the number of hidden layer nodes M, randomly generate the input weights w_i and the thresholds b_i.
Step 2: calculate the hidden layer output matrix H.
Step 3: calculate the output weights β̂ = H⁺T.
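Steps 1-3 can be sketched as a minimal NumPy implementation for the regression case; the sigmoid activation and the [-1, 1] initialization range are common choices assumed here, not prescribed by the paper:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine (regression), following Steps 1-3 above."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # H[j, i] = g(w_i . x_j + b_i), with a sigmoid activation g
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, T):
        m = X.shape[1]
        # Step 1: random input weights and hidden-node thresholds
        self.W = self.rng.uniform(-1, 1, size=(m, self.n_hidden))
        self.b = self.rng.uniform(-1, 1, size=self.n_hidden)
        # Step 2: hidden layer output matrix H
        H = self._hidden(X)
        # Step 3: output weights via the Moore-Penrose pseudoinverse
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Because only the linear output layer is solved, training reduces to one pseudoinverse computation, which is what gives ELM its speed advantage over iterative backpropagation.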

3.2. ELM Optimization Model

Although the ELM learning algorithm has certain advantages in computational performance and accuracy on regression problems, it determines the input weights and hidden layer thresholds randomly, without prior knowledge, before obtaining the output weights of the network. If the input weights and hidden layer thresholds are not properly selected, the prediction accuracy and generalization ability of the ELM will suffer. To address this problem, the CNGWO algorithm is used to optimize the extreme learning machine prediction model (CNGWO-ELM). The core idea is to use the sample data as the input of the ELM and let the CNGWO algorithm search for and adjust the best input weights and hidden layer node thresholds; the regression effect of the ELM algorithm is best when the number of hidden layer nodes is as small as possible, and the output weights are obtained analytically via the MP generalized inverse. Figure 1 depicts the process by which CNGWO optimizes the ELM model parameters.

The specific steps of CNGWO to optimize the ELM model parameters are as follows:

Step 1: population initialization: randomly generate a population of N individuals, each encoding a set of ELM input weights and hidden layer thresholds.
Step 2: variable selection and data acquisition: when modeling gas emissions, select reasonable input and output variables, collect and process the operational data related to modeling from the combustion system, and divide them into a training data set and a test data set.
Step 3: determine the fitness function J, defined as the error between the target output vector and the network predicted output.
Step 4: model selection: randomly generate the initial population, establish the gas emission prediction model from it, and calculate the fitness values. If the fitness value does not meet the requirements, the CNGWO algorithm is used to optimize the model parameters of the ELM until it is satisfactory. As a result, the CNGWO-ELM model is established.
Step 5: model validation: validate the model performance using the test data.
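The fitness evaluation at the heart of these steps can be sketched as a wrapper that decodes one candidate individual into ELM input weights and thresholds, solves the output weights analytically, and scores the result. Using the training RMSE as J is an assumption; the paper only specifies that J measures the error between target and predicted outputs:

```python
import numpy as np

def make_elm_fitness(X, T, n_hidden):
    """Build a fitness function J over ELM input weights/thresholds (Step 3).

    Each candidate theta concatenates the flattened input weight matrix W
    (m * n_hidden entries) and the hidden-node thresholds b (n_hidden entries).
    """
    m = X.shape[1]

    def fitness(theta):
        # decode: first m*n_hidden entries are W, the rest are the thresholds b
        W = theta[: m * n_hidden].reshape(m, n_hidden)
        b = theta[m * n_hidden:]
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden layer output
        beta = np.linalg.pinv(H) @ T                  # analytic output weights
        pred = H @ beta
        return float(np.sqrt(np.mean((T - pred) ** 2)))  # J = training RMSE (assumed)

    return fitness
```

This fitness function is what the CNGWO optimizer would minimize; any candidate's score is bounded above by the error of the trivial zero predictor, since the pseudoinverse already yields the least-squares output weights for the decoded hidden layer.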

4. Experimental Comparison

4.1. Experimental Setup

The CFB boiler adopts a single furnace, a single air distribution plate, an M-type arrangement, and the circulating fluidized bed combustion mode. The boiler consists of 1 furnace, 4 steam-cooled cyclones, 4 return valves, 4 external heat exchangers, 8 slag coolers, and 2 rotary air preheaters, with a double flue at the tail. The preheater uses a baffle to adjust the temperature, and the start-up bed material adding system adopts a mechanical feeding mode, as shown in Table 1.

The CFB boiler burns coal blended from coal slime, vermiculite, and end coal. The design mixing ratio of coal slime, vermiculite, and end coal is 55 : 20 : 25; the ratio of coal mine slime, vermiculite, and end coal is 35 : 35 : 30. The specific coal quality information is shown in Table 2.

The design coal has a low nitrogen content, which reduces the formation of fuel-type NOx, but its high volatile content is not conducive to controlling NOx emissions. Using these data and the modeling method proposed above, a CNGWO-ELM-based NOx emission prediction model and a boiler load prediction model are established. The boiler load prediction model has 9 input features: corrected total fuel quantity, feed water flow rate, A coal mill inlet air volume, B coal mill inlet air volume, D coal mill inlet air volume, E coal mill inlet air volume, total primary air volume, total secondary air volume, and the boiler load per unit time before the measurement time, with the boiler load as the output. The NOx emission prediction model has 16 input features: corrected total fuel quantity, main feed water flow rate, A coal mill inlet air volume, B coal mill inlet air volume, D coal mill inlet air volume, E coal mill inlet air volume, total primary air volume, total secondary air volume, furnace pressure, A coal mill inlet primary air temperature, B coal mill inlet primary air temperature, D coal mill inlet primary air temperature, E coal mill inlet primary air temperature, front wall outlet flue gas oxygen content, rear wall outlet flue gas oxygen content, and measurement time. The experimental data and the output values of each NOx emission index are shown in Table 3.

4.2. Comparison of Model Prediction Control Results

The comparison between the CNGWO-ELM predictive control proposed in this paper and the actual values from the power plant is shown in Figures 2 and 3. It can be seen from Figure 2 that there is a high degree of consistency between the predicted load value and the actual load value and that the accuracy is high. Figure 3 compares the actual emissions with the values predicted by the proposed chemical gas emission prediction model, showing that the model performs well and the error between the actual and predicted values is small. At the same time, the generalization ability test results show that the maximum relative error of chemical gas emissions is 3.56%, indicating that the model has strong generalization ability.

In order to better assess the algorithm applied in this paper, other methods are used for comparison of the predicted values. Figure 4 shows the prediction results of the 3 models on 50 heat consumption rate test samples. It can be seen that the CNGWO-ELM model predicts the test samples well; compared with the other two models, its prediction accuracy is higher, indicating that the CNGWO-ELM model has strong generalization ability.

4.3. Performance Comparison

In order to facilitate the evaluation of the performance of the model, this paper uses the root mean squared error (RMSE), the mean relative error (MRE), and the coefficient of determination R², defined as follows:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ),
MRE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i| / y_i,
R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)²,

where n is the number of samples, y_i is the actual measured value, ŷ_i is the corresponding predicted value, and ȳ is the average of the actual measured values.
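The three indices above translate directly into a small evaluation helper:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MRE, and R^2 as defined above (y_true must be nonzero for MRE)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mre = float(np.mean(np.abs(y_true - y_pred) / y_true))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)
    return rmse, mre, r2
```

For a perfect prediction, RMSE and MRE are 0 and R² is 1; larger prediction errors raise RMSE and MRE and pull R² below 1.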

It can be seen that the predicted values and the true values of the CNGWO-ELM model are distributed close to each other, indicating that the model can predict the chemical gas well; the effect is shown in Table 4.

The CNGWO-ELM model has smaller RMSE and MRE on the training samples and the smallest error on the test samples, which indicates that the generalization ability of the CNGWO-ELM model remains good even when the number of input variables is large. As the sample size changes, the values of RMSE, MRE, and R² change accordingly and remain favorable. In the optimization process of the ELM algorithm, the relevant data can thus be effectively optimized and analyzed.

5. Conclusions

Chemical gas emissions are affected by many factors with complex interactions. In order to predict chemical gas emissions accurately, a prediction model based on an improved grey wolf optimization algorithm (CNGWO) and the extreme learning machine (ELM) is proposed. CNGWO is used to preselect the ELM model parameters to improve the accuracy and generalization ability of the predictive model. Taking a 330 MW pulverized coal-fired boiler as the test object, a CNGWO-ELM chemical emission prediction model was established; the model was trained and tested using the relevant data collected by the DCS as training samples and test samples. Simulation experiments show that the CNGWO-ELM chemical emission prediction model has good accuracy, strong generalization ability, and high practical value. In the future, other optimization algorithms will be introduced to achieve fast and accurate prediction and to improve the global optimization effect.

Data Availability

The data used in this article are available in https://pan.baidu.com/s/1YHr7hRz25evFtIB1iNIpYw. Download code: dju4.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All the authors contributed equally to the writing of this paper and read and approved the final manuscript.

Acknowledgments

This work was supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant no. KJZD-K201902101) and the Open Fund of Chongqing Key Laboratory of Spatial Data Mining and Big Data Integration for Ecology and Environment.