Abstract

An optimal prediction model for the flow boiling heat transfer of refrigerant R245fa inside horizontal smooth tubes is proposed based on the GRNN neural network. The main factors that strongly affect flow boiling, namely the mass flux rate, heat flux, quality of the vapor-liquid mixture, evaporation temperature, and tube inner diameter, are used as the inputs of the model, and the flow boiling heat transfer coefficient is the output. Through training and learning, the neural network model is used to optimize the prediction of the flow boiling heat transfer coefficient of R245fa in horizontal smooth tubes, and the prediction results are in good agreement with the experimental results. For the heat transfer network model, the average deviation is 7.59%, the absolute average deviation is 4.89%, and the root-mean-square deviation is 10.51%. The optimized prediction accuracy of the flow boiling heat transfer coefficient is significantly improved compared with four frequently used conventional correlations. The simulation results show that the GRNN-based modeling method is feasible for calculating the flow boiling heat transfer coefficient of R245fa and may provide guidelines for the optimized design of tube evaporators for R245fa.

1. Introduction

At present, studies on improving the overall efficiency of the organic Rankine cycle (ORC) have found that the evaporator is a key component of the cycle, and in recent years many researchers have worked on improving its heat transfer efficiency. In the ORC system, the working fluid is also a very important factor affecting the stable, safe, and efficient operation of the system [1]. R245fa is one of the ideal working fluids for low-temperature waste heat power generation with the organic Rankine cycle [2]. Considering the safe operation and technical economy of the evaporator in an R245fa organic Rankine cycle system, the flow boiling heat transfer performance of the fluid must be grasped accurately [3]. However, because the flow boiling heat transfer process is complex and involves many influencing factors with strong coupling, uncertainty, and nonlinearity [4–6], it is difficult to obtain an accurate model through traditional modeling methods. Most existing correlations are empirical or semiempirical relations summarized from experimental data [7, 8]. Moreover, experimental research and correlations for R245fa flow boiling heat transfer are still very rare, and calculations for R245fa still have rather large errors.

Therefore, a GRNN network optimization model for the flow boiling heat transfer of R245fa in horizontal smooth tubes under saturated conditions is established to predict its flow boiling heat transfer performance [9, 10]. The model is compared with the results of traditional correlations and is then used to optimize the traditional correlations.

The structure of this paper is as follows: Section 2 formulates the problem, Section 3 establishes the flow boiling heat transfer neural network model, Section 4 presents the optimized prediction results and their analysis, and Section 5 concludes the paper.

2. Problem Formulation

Research on the flow boiling heat transfer of horizontal smooth tubes mainly focuses on the influence of the mass flux rate, heat flux, quality of the vapor-liquid mixture, and pipe diameter on the boiling heat transfer coefficient [11], as well as on the derivation and improvement of correlations for heat transfer and pressure drop during the flow boiling process. Besides, as a fourth-generation refrigerant, R245fa also plays an important role in the study of flow boiling.

In this paper, R245fa is used as the working fluid to establish an optimized prediction model of flow boiling heat transfer in horizontal smooth tubes based on the GRNN neural network. In the GRNN neural network, the mass flux rate, heat flux, quality of the vapor-liquid mixture, evaporation temperature, and inner tube diameter are used as the network inputs, and the flow boiling heat transfer coefficient is the network output. After learning, the model can be used to predict the boiling heat transfer coefficient of R245fa in horizontal smooth tubes. By comparing the results with those of four traditional correlations, we can see which is closer to the experimental results. It is shown that the GRNN prediction model of flow boiling heat transfer is more accurate than the four traditional correlations, which further illustrates that the model can optimize the traditional formulas.

3. The Establishment of a Neural Network Model for Flow Boiling Heat Transfer

The GRNN network trains quickly, is simple to design, and is appropriate for approximating nonlinear functions; it handles complex and highly nonlinear problems well. The GRNN network can obtain a good learning effect even with few sample data. However, the selection of the GRNN training samples is very important for the construction of the network; the samples must be representative, as they directly determine the network weights and affect the final prediction results [12].

3.1. Parameter Selection

The ANOVA (analysis of variance) technique was used to identify the parameters that significantly influence the boiling heat transfer coefficient, and in particular the most influential ambient parameter. Four ambient parameters (atmospheric pressure, ambient temperature, ambient relative humidity, and ambient wind velocity) were considered at three levels (low, medium, and high), giving a $3^4$ full factorial design of 81 ambient conditions. The control factors (ambient parameters) considered for the ANOVA study are listed in Table 1. The minimum number of ambient conditions required for ANOVA is calculated using the following equation [13]:

$$N = 1 + P\,(L - 1)$$

Here, $N$ represents the minimum number of ambient data required for ANOVA, $P$ represents the number of ambient parameters, and $L$ represents the number of levels, with $P = 4$ and $L = 3$. So, nine ambient conditions are simulated using the GRNN. The ambient parameters considered at the three levels and the standard orthogonal array used for ANOVA are listed in Table 2. The GRNN-simulated boiling heat transfer coefficients for the standard orthogonal array are shown in Table 3. The following equations are used to calculate the ANOVA parameters [13].
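As a quick check, this count can be evaluated directly in MATLAB; the sketch below simply assumes the minimum-trial relation given above.

% Minimal check of the minimum-trial count, assuming N = 1 + P*(L - 1).
P = 4;                   % number of ambient parameters
L = 3;                   % number of levels per parameter
N = 1 + P*(L - 1);       % minimum number of ambient conditions
fprintf('Minimum number of ambient conditions: %d\n', N);   % prints 9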

The sum of squares due to the mean ($SS_m$) is given by

$$SS_m = N\,\bar{h}^{\,2}$$

Here, $N$ represents the minimum number of ambient data and $\bar{h}$ represents the average of the GRNN-simulated boiling heat transfer coefficients. The sum of squares due to each parameter, $SS_P$, is given by the following equation:

$$SS_P = \sum_{j=1}^{L} \frac{\left(\sum h_{P_j}\right)^{2}}{n_j} - SS_m$$

Here, $L$ represents the number of levels, $h_{P_j}$ denotes the simulated heat transfer coefficients at level $j$ of the parameter, and $n_j$ is the number of ambient conditions at that level. The total sum of squares (TSS) is given by the following equation:

$$TSS = \sum_{i=1}^{N} h_i^{2} - SS_m$$

The degrees of freedom (DOF) of a parameter is given by

$$DOF = L - 1$$

The mean sum of squares (MSS) is given by

$$MSS = \frac{SS_P}{DOF}$$

The pure sum of squares (PSS) is calculated by the following equation, where $MSS_e$ is the mean sum of squares of the error term:

$$PSS = SS_P - DOF \times MSS_e$$

The percentage contribution (PC) is given by

$$PC = \frac{PSS}{TSS} \times 100\%$$

Based on the above equations, the influence of the ambient conditions on the mass flux rate, heat flux, quality of the vapor-liquid mixture, and evaporation temperature is calculated.
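For illustration, the following MATLAB sketch evaluates these ANOVA quantities for a single control factor. The variable names grnn_h, factor_levels, and MSS_error are hypothetical placeholders (the GRNN-simulated responses of Table 3, the level index of the factor in each run, and the error mean square, respectively); the sketch follows the textbook formulas assumed above rather than the exact implementation of the paper.

% Illustrative ANOVA decomposition for one control factor.
y   = grnn_h(:);               % simulated heat transfer coefficients, N x 1
lev = factor_levels(:);        % level index (1..L) of the factor in each run, N x 1
N   = numel(y);
L   = max(lev);

SSm = N * mean(y)^2;           % sum of squares due to the mean
TSS = sum(y.^2) - SSm;         % total sum of squares

SS = 0;                        % sum of squares due to the factor
for j = 1:L
    yj = y(lev == j);
    SS = SS + sum(yj)^2 / numel(yj);
end
SS = SS - SSm;

DOF = L - 1;                   % degrees of freedom of the factor
MSS = SS / DOF;                % mean sum of squares
PSS = SS - DOF * MSS_error;    % pure sum of squares (MSS_error assumed available)
PC  = 100 * PSS / TSS;         % percentage contribution of the factor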

3.2. The GRNN Network Structure

The mass flux rate, heat flux, quality of the vapor-liquid mixture, and evaporation temperature are taken as the inputs of the GRNN neural network, and the flow boiling heat transfer coefficient is taken as the network output. In this work, the mass flux rate is the mass of fluid flowing across a unit cross-sectional area per unit time. The heat flux is the heat transfer rate per unit area, where the heat transfer rate is the heat passing through a given area per unit time. The quality of the vapor-liquid mixture is the mass fraction of dry steam in the wet steam. The evaporation temperature is the temperature of the fluid during vaporization. The GRNN network structure includes four layers: the input layer, the pattern layer, the summation layer, and the output layer; the pattern layer and the summation layer constitute the intermediate network.

The GRNN network structure of the flow boiling heat transfer model is shown in Figure 1 [14].
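To make the roles of the pattern and summation layers concrete, the sketch below shows the textbook GRNN forward pass, a Gaussian-kernel weighted average of the stored training outputs; this is what newgrnn builds internally, and the function name grnn_predict is illustrative only (saved as grnn_predict.m).

% Illustrative GRNN forward pass: pattern layer = Gaussian kernels centered on
% the stored training samples; summation/output layers = weighted average.
function yq = grnn_predict(Xtrain, ytrain, xq, spread)
    % Xtrain: n_inputs x n_samples training inputs (G, q, x, Te per column)
    % ytrain: 1 x n_samples training outputs (heat transfer coefficient h)
    % xq:     n_inputs x 1 query point; spread: smoothing factor
    d2 = sum(bsxfun(@minus, Xtrain, xq).^2, 1);   % squared distance to each stored sample
    w  = exp(-d2 / (2*spread^2));                 % pattern-layer outputs
    yq = sum(w .* ytrain) / sum(w);               % summation layers: weighted average
end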

3.3. Data Collection, Training Samples, and Test Samples

The selection of the sample data has an important influence on the learning speed and generalization ability of the neural network. If the sample size is too small, the network's expression is insufficient and its generalization ability is reduced. Meanwhile, an excessive number of samples may introduce redundancy, increase the training time of the network, and may even cause "overfitting," which also degrades the network's generalization ability. Therefore, the selected sample data must be representative and comprehensive. At present, research on R245fa still focuses mostly on thermodynamic cycle analysis; experimental studies on its flow boiling heat transfer are rare, and the experimental data are difficult to obtain. Therefore, the R245fa data in this paper are taken from the experimental study of the R245fa boiling heat transfer performance by Huang et al. [15].

After excluding invalid experimental data points, 108 groups of valid data are obtained as the total sample. The condition ranges of the sample data are as follows: evaporation temperature of 40°C to 60°C, refrigerant mass flux rate of 393.2–786.3 kg·m⁻²·s⁻¹, and refrigerant heat flux of 208.3–1380.7 W·m⁻².

In order to establish the neural network model, the sample data are divided into a training set and a testing set based on the principle of ergodic data grouping, which traverses every item of the 108-group total sample. In the first pass, all items are divided into two parts: 54 odd-numbered groups and 54 even-numbered groups. In the second pass, the latter part is further divided into 27 odd-numbered groups and 27 even-numbered groups. In order to cover all test conditions, the training samples consist of the odd-numbered items from the first pass and the even-numbered items from the second pass. Thus, the total sample is divided into 81 groups as the training set and the remaining 27 groups as the test set, as sketched below. Part of the experimental data is shown in Table 4.
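The two-pass odd/even grouping can be written down directly; in the sketch below, sample is a hypothetical 108-row matrix holding one data group per row.

% Hedged sketch of the two-pass odd/even grouping described above.
idx   = 1:108;
odd1  = idx(mod(idx, 2) == 1);          % 54 odd-numbered groups (first pass) -> training
even1 = idx(mod(idx, 2) == 0);          % 54 even-numbered groups
odd2  = even1(1:2:end);                 % 27 odd-positioned groups of the second pass -> test
even2 = even1(2:2:end);                 % 27 even-positioned groups -> training
train_set = sample([odd1, even2], :);   % 81 training groups
test_set  = sample(odd2, :);            % 27 test groups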

3.4. The Preprocessing and Postprocessing of the Data

Because the components of the data differ greatly in magnitude, sometimes by several orders of magnitude, large values would otherwise overwhelm the contribution of small values in the neural network. Therefore, the data need to be normalized: all input data are converted to the range [0, 1], which effectively reduces data redundancy and speeds up network training, and training the neural network with the normalized data improves the accuracy of the model.

The normalization formula is as follows:

$$x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

In the formula, $x_{\max}$ is the maximum value of the experimental data, $x_{\min}$ is the minimum value of the experimental data, $x$ is the raw experimental value, and $x^{*}$ is the corresponding normalized value.

To ensure consistency in the format of the input and output data, the training and testing samples of the neural network need to be normalized. The sample data after normalization are shown in Table 5.

After the training of the GRNN network model is completed, the outputs of the network are normalized values, which need to be postprocessed; that is, the real output values are obtained by antinormalization so that they can be compared easily and intuitively with the original experimental data.

The antinormalization formula is as follows:

$$x = x^{*}\,(x_{\max} - x_{\min}) + x_{\min}$$
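A minimal sketch of both transforms is given below; the variable names (input_raw, hn_pred, h_max, h_min) are hypothetical, one physical variable is stored per row, and bsxfun is used so the code also runs on older MATLAB releases such as R2008 (MATLAB's mapminmax could be used instead).

% Min-max normalization of the raw data into [0, 1] (one variable per row).
x_min  = min(input_raw, [], 2);
x_max  = max(input_raw, [], 2);
inputn = bsxfun(@rdivide, bsxfun(@minus, input_raw, x_min), x_max - x_min);

% Antinormalization of the network output back to physical units.
h_pred = hn_pred .* (h_max - h_min) + h_min;   % h_max, h_min taken from the training data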

3.5. The GRNN Neural Network Training and Learning

The GRNN neural network is designed with the newgrnn() function of the Neural Network Toolbox in MATLAB R2008. The calling format of the function is as follows:

net = newgrnn (inputn_train, outputn_train, spread)

where

inputn_train: sample inputs of the training set;

outputn_train: sample outputs of the training set;

spread: smoothing factor of the GRNN network.

Considering that the sample data are still limited, this paper uses cross-validation (CV) to train the GRNN network and find the best spread. Under normal circumstances, the smaller the spread, the better the network approximates the data, but the approximation is less smooth; conversely, the larger the spread, the smoother the network approximation, but the error increases greatly. Based on experience, the spread is searched over the interval [0.1, 0.5] during GRNN creation [16], increasing in steps of 0.01 in a training loop so as to achieve the best prediction after training. The results of the program show that the performance on the training data is optimal when the spread value is 0.14 [17–19]. The heat transfer prediction process of the GRNN network is shown in Figure 2 [20].
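The spread search can be sketched as follows. newgrnn and sim are the Neural Network Toolbox calls named above, while the held-out validation variables (inputn_val, outputn_val) and the relative-error measure are assumptions of this sketch rather than the exact cross-validation scheme of the paper.

% Hedged sketch of the spread search over [0.1, 0.5] in steps of 0.01.
spreads     = 0.1:0.01:0.5;
best_err    = Inf;
best_spread = spreads(1);
for s = spreads
    net  = newgrnn(inputn_train, outputn_train, s);        % build GRNN with smoothing factor s
    yval = sim(net, inputn_val);                           % predict on the held-out fold
    err  = mean(abs(yval - outputn_val) ./ outputn_val);   % mean relative error
    if err < best_err
        best_err    = err;
        best_spread = s;                                    % the paper reports 0.14 as optimal
    end
end
net = newgrnn(inputn_train, outputn_train, best_spread);    % final model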

The average relative error between the simulation output values and the experimental values is 1.56%, and the maximum relative error is 10.17%, which shows that the GRNN network has learned well the internal relationship among the mass flux rate of the working fluid, the heat flux, the quality of the vapor-liquid mixture, and the evaporation temperature represented by the network [21, 22].

4. The Prediction Results and Analysis

4.1. Comparing the Experimental Results with the Prediction Results

Figure 3 compares the simulation results of the GRNN network model with the experimental results. The abscissa is the experimental result, and the ordinate is the simulation result of the network model. The figure shows good agreement between the network simulation results and the experimental results: about 88% of the training data points fall within the ±10% error band, so the GRNN network predicts the experimental data well. This shows that the network has high accuracy and good generalization. It is worth noting that although the predicted and experimental results of individual data points deviate considerably, this is basically because the training samples are insufficient and the data distribution is uneven; it can be addressed by further enlarging the training dataset and by optimizing the network centers, widths, and connection weights.
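For reference, the deviation statistics quoted for the model can be computed as below; the exact definitions used in the paper are not stated, so the relative-deviation forms here are assumptions, and h_pred and h_exp are hypothetical vectors of predicted and experimental heat transfer coefficients.

% Assumed definitions of the deviation statistics (relative to experiment).
e        = (h_pred - h_exp) ./ h_exp;      % relative deviation of each data point
Bias     = 100 * mean(e);                  % average deviation, %
AAD      = 100 * mean(abs(e));             % absolute average deviation, %
RMS      = 100 * sqrt(mean(e.^2));         % root-mean-square deviation, %
within10 = 100 * mean(abs(e) <= 0.10);     % share of points inside the ±10% band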

4.2. Comparison of the Results of the GRNN Network Model and the Experimental Results

After the training of the GRNN network is completed, the network begins to predict. Figure 4 shows the predicted output curve of the GRNN network. It can be seen that the predicted output of the network mostly agrees with the expected output, although in some places it cannot completely reflect the trend of the experimental data point distribution. After calculation, the average relative error between the output values of the GRNN network and the test set is 2.04%, and the maximum relative error is 13.58%. Although these errors are larger than the training errors reported in the previous section, the trained GRNN network model still shows good generalization. The GRNN network forecast curve is shown in Figure 4.

4.3. Comparison of the Predicted Results with the Traditional Correlation Calculations

In order to investigate the accuracy of the network model relative to traditional correlations, typical correlations for refrigerant flow boiling heat transfer are chosen. As shown in Table 6, compared with the Chen [23], Gungor and Winterton [24], Liu and Winterton [25], and Shah [26] correlations, the prediction accuracy of the GRNN network model is improved to some extent. This proves that the model is suitable for predicting R245fa flow boiling heat transfer in a horizontal tube and can satisfy the precision requirements of engineering applications.

4.4. Analysis of the Influence of the Input Parameters

To further examine the prediction accuracy of the GRNN network model and to verify that its predictions follow the experimental results as the input parameters change, the influence of the input parameters on the prediction performance of the GRNN network is analyzed.

Figure 5 compares the boiling heat transfer coefficients calculated by the GRNN network model and by the four traditional correlations with the experimental data as the vapor quality varies, at the nominal mass flux rate and evaporation temperature of the corresponding experimental condition.

From the distribution of the experimental data points in Figure 6, under the same mass flux rate, the heat transfer coefficient of the working fluid increases as the quality of the vapor-liquid mixture increases in the low-quality region, while in the high-quality region the heat transfer coefficient tends to decrease as the quality increases. The results of the Chen, Gungor and Winterton, and Shah correlations differ considerably from the experimental results, whereas the Liu and Winterton correlation agrees well with the experimental values. The predictions of the GRNN network model follow the distribution of the experimental data points even more closely, which demonstrates that the GRNN network agrees best with the experiments [27].

Figures 6 and 7 show the heat transfer coefficients predicted by the GRNN network model as functions of the quality of the vapor-liquid mixture and the mass flux rate, compared with the experimental data at evaporation temperatures of 50°C and 60°C, respectively. From the distribution of the experimental data points in the figures, the heat transfer coefficient increases with increasing mass flux rate. Under the same mass flux rate, the heat transfer coefficient generally increases with quality at first and then decreases [28]. It is important to emphasize that the predictions of the GRNN network model can roughly reflect this trend, but in some regions the predicted data deviate considerably from the experimental data points. This is mainly because the available experimental data are insufficient and the covered range of operating conditions is not wide enough, so the training of the GRNN network is not sufficient; it is therefore necessary to further expand the experimental operating range and the database.

5. Conclusion

(1) It is feasible to establish a GRNN optimal prediction model for the flow boiling heat transfer of the pure substance R245fa in horizontal smooth tubes. The network learns quickly and does not require the number of hidden-layer neurons to be determined manually, and it avoids analyzing the complex internal mechanism of the R245fa flow boiling heat transfer process. It can not only improve the prediction accuracy but also effectively reduce the research cost, the experimental workload, and the time required [29].

(2) For the optimal prediction of the GRNN network model, the average deviation (Bias) is 7.59%, the absolute average deviation (AAD) is 4.89%, and the root-mean-square (RMS) deviation is 10.51%, and about 92% of the data points have errors within ±10%. Comparison with the results of four common correlations shows that the optimal prediction precision is superior to the traditional correlations, and the predicted trends with the quality of the vapor-liquid mixture, mass flux rate, and heat flux agree with the experimental trends.

(3) The neural network technique combined with experimental study can accurately predict the flow boiling heat transfer of R245fa in smooth tubes, reduce the experimental workload, and provide a useful reference for the optimized design of tube evaporators in R245fa systems. However, some problems still need further study: for example, the experimental data source for R245fa is incomplete, and the choice of network type and training algorithm needs to be improved so as to optimize the GRNN and further improve the accuracy of the optimal prediction of R245fa flow boiling heat transfer.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Contract no. 51566005.