Abstract

The helium burning phase represents the second stage in which a star consumes nuclear fuel in its interior. In this stage, the three elements carbon, oxygen, and neon are synthesized. The aim of the present paper is twofold. Firstly, it develops an analytical solution to the system of conformable fractional differential equations of the helium burning network; for this purpose, we used the series expansion method and obtained recurrence relations for the product abundances, that is, helium, carbon, oxygen, and neon. Using four different initial abundances, we calculated 44 gas models covering a range of the fractional parameter in steps of 0.05. We found that the effects of the fractional parameter on the product abundances are small, which coincides with the results obtained by a previous study. Secondly, we introduced the mathematical model of the neural network (NN) and developed a neural network algorithm to simulate the helium burning network using a feed-forward process. A comparison between the NN and the analytical models revealed very good agreement for all gas models. We found that the NN can be considered a powerful tool for solving and modeling nuclear burning networks and can be applied to other stellar nuclear burning networks.

1. Introduction

Nowadays, fractional calculus is widely applied in physics, astrophysics, and related sciences [1, 2]. Examples of recent applications of fractional calculus in physics are found in [3], in which the author introduced a generalized fractional scale factor and a time-dependent Hubble parameter obeying an “Ornstein–Uhlenbeck-like fractional differential equation,” which serves to describe the accelerated expansion of a nonsingular universe; in [4], in which the author extended the idea of fractional spin based on a two-order fractional derivative operator; and in [5], in which the author generalized the fractional action integral using the Saigo–Maeda fractional operators defined in terms of the Appell hypergeometric function.

In astrophysics, many problems have been handled using fractional models. Examples of these studies are found in [6], where the author introduced an analytical solution to the fractional white dwarf equation; in [7], in which the authors analyzed fractional incompressible gas spheres; and in [8, 9], in which the authors introduced analytical solutions to the first and second types of the Lane–Emden equation in the sense of the modified Riemann–Liouville fractional derivative. Nouh in [10] solved the fractional helium burning network using a series expansion method. Abdel-Salam and Nouh [11] and Yousif et al. [12] introduced analytical solutions to the conformable polytropic and isothermal gas spheres. Simulation of ordinary (ODE) and partial differential equations (PDE) using an artificial neural network (ANN) gives very good accuracy when compared with both numerical and analytical methods. Many authors have dealt with this issue and developed neural algorithms to solve ODEs and PDEs. Dissanayake and Phan-Thien [13] first introduced the concept of approximating the solutions of differential equations with neural networks, where training was carried out by minimizing losses based on how well the network satisfied the boundary conditions and the differential equations themselves. Lagaris et al. [14] demonstrated that the network form could be chosen by construction to satisfy the boundary conditions and that automatic differentiation could be used to determine the derivatives that appear in the loss function. This approach has been extended to systems with irregular boundaries [15, 16] and applied to the solution of PDEs occurring in fluid mechanics [17], and software packages have been developed to facilitate such applications [18–20]. Nouh et al. [21] and Azzam et al. [22] developed neural network algorithms to solve the first and second types of Lane–Emden equations arising in astrophysics.

The helium burning stage (also known as the triple-alpha process) represents the second stage in which stars transfer nuclear energy from the interior to their surfaces. In this stage, the nuclear energy is almost entirely converted to light as it passes through the stellar atmosphere. Helium burning (HB) releases energy per unit fuel of about 6 × 10²³ MeV/g ≈ 10¹⁸ erg/g. The reaction equations that govern the HB network may be written as follows [10], where the conversion of helium into carbon requires temperatures of about 10⁸ K.
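
The display form of equation (1) did not survive extraction. For reference, the standard helium burning chain (triple-alpha followed by successive alpha captures), which we take to be the content of equation (1), can be written as:

```latex
\begin{align*}
3\,{}^{4}\mathrm{He} &\rightarrow {}^{12}\mathrm{C} + \gamma,\\
{}^{12}\mathrm{C} + {}^{4}\mathrm{He} &\rightarrow {}^{16}\mathrm{O} + \gamma,\\
{}^{16}\mathrm{O} + {}^{4}\mathrm{He} &\rightarrow {}^{20}\mathrm{Ne} + \gamma.
\end{align*}
```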

Clayton [23] set up a model for the helium burning process by taking into account the above reactions. If the numbers of atoms per unit mass of stellar material for helium, carbon, oxygen, and neon are represented by x, y, z, and r, respectively, then the following four equations (also called the kinetic equations) govern the time-dependent changes in abundance, where a, b, and c are the reaction rates.
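
The display equation (2) was also lost in extraction. As a hedged illustration only, the sketch below integrates the standard Clayton form of these kinetic equations (the form derived in Appendix A) for the integer-order case; the rate values a, b, and c are placeholders, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder reaction rates; the paper's a, b, and c follow from the rates in Appendix A.
a, b, c = 1.0e-3, 1.0e-3, 1.0e-3

def helium_network(t, u):
    """Integer-order kinetic equations in the standard Clayton form:
    u = (x, y, z, r) = abundances of He4, C12, O16, Ne20."""
    x, y, z, r = u
    dx = -3.0 * a * x**3 - b * x * y - c * x * z   # He4 consumed by all three reactions
    dy = a * x**3 - b * x * y                      # C12 made by 3-alpha, destroyed by alpha capture
    dz = b * x * y - c * x * z                     # O16
    dr = c * x * z                                 # Ne20
    return [dx, dy, dz, dr]

# Pure helium initial composition, integrated over the time span used later in the paper.
sol = solve_ivp(helium_network, (0.0, 2100.0), [1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 2100.0, 701))
x, y, z, r = sol.y
```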

The system of equation (2) represents the integer version of the helium burning network and is solved simultaneously by computational or analytical methods [23–26]. Appendix A clarifies the derivation of the system of equation (2). Fractional kinetic equations (such as the helium burning network) have been solved by many authors. In terms of H-functions, [27] presented a solution to the fractional generalized kinetic equation. Generalized fractional kinetic equations were solved by [28]. Chaurasia and Pandey [29] solved the fractional kinetic equations in a series form of the Lorenzo–Hartley function.

In the present article, we develop a neural network algorithm to solve the fractional system of differential equations describing the helium burning network. We use the principles of the conformable fractional derivative in the mathematical modeling of the ANN. The ANN architecture used in this research is a three-layer feed-forward network trained with the backpropagation (BP) algorithm based on the gradient descent delta rule.

The analytical solution is developed using the series expansion method, and a comparison between the ANN and analytical models is performed to demonstrate the efficiency and applicability of the ANN for solving the conformable helium burning network. The paper is organized as follows: Section 2 introduces the details of the analytical solution of the conformable helium burning model using the series expansion method. Section 3 deals with the mathematical modeling of the neural network technique, with its gradient computations and backpropagation training algorithm. Section 4 discusses the results obtained and the comparison between the ANN and analytical models. Section 5 gives the conclusion.

2. Analytical Solution to the Conformable Helium Burning Model

Numerical integration techniques are certainly valid and may provide very accurate models. However, it is worthwhile to obtain models with the desired precision from complete analytical formulas, since such formulas usually provide much deeper insight into the essence of a model than numerical integration does. In the absence of a closed-form solution for a particular differential equation, a power series solution may serve as an analytical representation of the solution.

The fractional form of equation (2) is given by [10]

If , then x, y, z, and r could be represented by the series of equation (4), where the coefficients are constants to be determined.
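
The series ansatz of equation (4) is missing from the extracted text. A plausible form, assuming the usual conformable power-series expansion in t^α (our assumption, not a verbatim reproduction of the paper's equation), is:

```latex
x(t)=\sum_{n=0}^{\infty}A_{n}t^{n\alpha},\qquad
y(t)=\sum_{n=0}^{\infty}B_{n}t^{n\alpha},\qquad
z(t)=\sum_{n=0}^{\infty}C_{n}t^{n\alpha},\qquad
r(t)=\sum_{n=0}^{\infty}D_{n}t^{n\alpha}.
```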

In the system of equation (2), the left-hand sides depict the time derivatives of the elemental abundances, while the abundance of helium (x) appears raised to the power of 3 on the right-hand side. To obtain the fractional derivative of this cubic term, we apply the fractional derivative of the product of two functions. Using the series expansion method, we obtained the recurrence relation for this term as follows.

Let , with .

That is,

Performing the fractional derivative of equation (5) k times, we get
and putting , we get

Since
we have

After some manipulations, we get
and putting and in equation (10), we have

If , then

Adding a zero-valued term to the second summation of the last equation, we get

From the last equation, we can write the coefficients as

Putting in equation (14), we have
where
Taking the fractional -derivatives of equation (4), we get
and inserting equations (4) and (17) into equation (3), the series coefficients , and could be obtained from

The recurrence relations corresponding to the integer model could be obtained by putting α = 1 in the last four formulas of equation (18) [26].

At n = 0 and with the initial values of the chemical composition , where are arbitrary constants, we get
At n = 1, we get
By applying the same scheme, we can determine the rest of the series terms. So, the product abundances could be represented by the series solution of equation (3) as
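
Since the display forms of the recurrence relations did not survive extraction, the sketch below shows how coefficient recurrences of this kind can be evaluated numerically. It assumes the conformable power-series ansatz given above, the standard Clayton form of the network, and the conformable rule T_α(t^{nα}) = nα t^{(n−1)α}, so it is an illustration rather than a verbatim transcription of equation (18).

```python
import numpy as np

def series_coefficients(alpha, a, b, c, x0, y0, z0, r0, n_terms=25):
    """Coefficients A_n, B_n, C_n, D_n of x(t) = sum_n A_n t^(n*alpha), etc.,
    obtained by matching powers of t^alpha term by term (hedged sketch)."""
    A, B, C, D = (np.zeros(n_terms) for _ in range(4))
    A[0], B[0], C[0], D[0] = x0, y0, z0, r0

    def cauchy(u, v, k):
        # k-th coefficient of the Cauchy product of the two series u and v
        return sum(u[i] * v[k - i] for i in range(k + 1))

    for k in range(n_terms - 1):
        x3_k = sum(A[i] * cauchy(A, A, k - i) for i in range(k + 1))  # (x^3)_k
        xy_k = cauchy(A, B, k)                                        # (x*y)_k
        xz_k = cauchy(A, C, k)                                        # (x*z)_k
        fac = (k + 1) * alpha    # from T_alpha(t^((k+1)alpha)) = (k+1)*alpha*t^(k*alpha)
        A[k + 1] = (-3.0 * a * x3_k - b * xy_k - c * xz_k) / fac
        B[k + 1] = (a * x3_k - b * xy_k) / fac
        C[k + 1] = (b * xy_k - c * xz_k) / fac
        D[k + 1] = c * xz_k / fac
    return A, B, C, D

def abundance(coeffs, t, alpha):
    """Evaluate the truncated series sum_n coeffs[n] * t^(n*alpha)."""
    n = np.arange(len(coeffs))
    return float(np.sum(coeffs * t ** (n * alpha)))
```

The truncated series is only reliable where it converges, so for long time spans the expansion would in practice be restarted, but the recurrence itself mirrors the structure described above.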

It is important to mention that X0, Y0, Z0, and R0 are arbitrary initial values that enable us to compute gas models with different chemical compositions, that is, pure helium or rich helium models.

3. Neural Network Algorithm

3.1. Mathematical Modeling of the Problem

To simulate the conformable fractional helium burning network represented by equation (3), we use the neural network architecture shown in Figure 1.

Considering the initial conditions , the neural network solution can be obtained by following the steps below [30].

The neural approximate solution of equation (3) has two terms: the first represents the initial values and the second represents the feed-forward neural network, where x is the input vector and p is the corresponding vector of adjustable weight parameters. Then, the output of the neural network is written as

Then, the neural network output is given by
where , is the weight from the input unit j to the hidden unit i, is the weight from the hidden unit i to the output, represents the bias of the ith hidden unit, and is the sigmoid activation function that has the form , , , and . By differentiating the network's output N with respect to the vector , we get

Differentiating equation (24) n times gives

As a result, the solution of the helium burning network is given as
which fulfills the initial conditions as

3.2. Gradient Computations and Parameter Updating

Using equation (27) to update the network parameters and compute the gradient, the error quantity to be minimized is given by
where
and is given by equation (25). So, the problem is converted into an unconstrained optimization problem.
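
As a concrete, hedged illustration of the construction described in Sections 3.1 and 3.2, the sketch below builds a Lagaris-type trial solution for a single generic conformable equation T_α u = f(t, u). The scalar network N, the trial form u_trial(t) = u0 + t N(t, p), and the identity T_α g(t) = t^(1−α) g'(t) for the conformable derivative are assumptions consistent with the text; the coupled four-species case would use one such output per abundance.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 20                                                  # hidden units, matching the paper's architecture
w, beta, nu = (rng.normal(size=H) for _ in range(3))    # input weights, biases, output weights

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def N(t):
    """Single-hidden-layer network output: sum_i nu_i * sigma(w_i*t + beta_i)."""
    return float(np.dot(nu, sigmoid(w * t + beta)))

def dN_dt(t):
    s = sigmoid(w * t + beta)
    return float(np.dot(nu * w, s * (1.0 - s)))

def trial(t, u0):
    """Trial solution u_trial(t) = u0 + t*N(t, p); satisfies u_trial(0) = u0 by construction."""
    return u0 + t * N(t)

def conformable_T(t, alpha):
    """T_alpha u_trial(t) = t^(1-alpha) * d u_trial/dt (Khalil conformable derivative)."""
    return t ** (1.0 - alpha) * (N(t) + t * dN_dt(t))

def loss(ts, u0, alpha, f):
    """Sum of squared residuals of T_alpha u = f(t, u) over the collocation points ts;
    this is the unconstrained quantity minimized over the parameters (w, beta, nu)."""
    return sum((conformable_T(t, alpha) - f(t, trial(t, u0))) ** 2 for t in ts)

# Toy right-hand side resembling the neon equation r' = c*x*z, with x(t)*z(t) replaced
# by a hypothetical placeholder profile.
c_rate = 1.0e-3
f_toy = lambda t, u: c_rate * t * np.exp(-t)
print(loss(np.linspace(0.1, 10.0, 50), 0.0, 0.9, f_toy))
```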

To update the network parameters, we train the neural network to find the optimized parameter values. After the training process, we obtain the network parameters and compute the following:

Now, N with one hidden layer is analogous to the conformable fractional derivative. By replacing the hidden unit transfer function with its nth-order fractional derivative, the gradient of the fractional N with respect to , , and could be written as

The network parameter updating rule can be given as
where are the learning rates and .

In the ANN-based stellar helium burning model, the neuron is the fundamental processing unit; it possesses a local memory and carries out localized information processing. At each neuron, the net input (z) is calculated by summing the weighted inputs and adding a bias (). The net input is then passed through a nonlinear activation function, which produces the neuron output (as seen in Figure 1) [31].

3.3. Training of BP Algorithm

The backpropagation (BP) training algorithm is a gradient algorithm that aims to minimize the average squared error between the desired output and the actual output of a feed-forward network.

It requires continuously differentiable nonlinearity. Figure 2 displays a flow chart of a backpropagation offline learning algorithm [32].

The algorithm is recursive, starting at the output units and working back to the first hidden layer. The output at the output layer is compared with the desired outputs using an error function of the following form:

For the hidden layer, the error function takes the form
where is the error term of the output layer and is the weight between the output and hidden layers. The weight of each connection is updated by propagating the error backward from the output layer to the input layer as follows:

The value of the learning rate is chosen such that it is neither too large, leading to overshooting, nor too small, leading to a slow convergence rate. The momentum term in the last part of equation (36), which is multiplied by a constant (the momentum), is used to accelerate the error convergence of the backpropagation learning algorithm and to help push the changes of the energy function over local increases, boosting the weights in the overall downhill direction [33]. This term adds a fraction of the most recent weight changes to the current weight update. Both η and the momentum term are set at the start of the training phase and determine the network's speed and stability [31, 34].
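
A minimal sketch of the weight update with a momentum term, in the spirit of equation (36); η = 0.035 is the learning rate quoted in Section 4.2, while the momentum value μ below is purely illustrative, since the paper's value is not reproduced in the extracted text.

```python
import numpy as np

eta, mu = 0.035, 0.8          # learning rate (from the paper) and an illustrative momentum

def update_weights(w, grad, prev_delta):
    """Gradient-descent delta rule with momentum:
    delta_w = -eta*grad + mu*prev_delta;  w_new = w + delta_w."""
    delta_w = -eta * grad + mu * prev_delta
    return w + delta_w, delta_w

# Usage inside a training loop: keep the previous update and feed it back in.
w = np.zeros(5)
prev_delta = np.zeros_like(w)
grad = np.ones_like(w)        # placeholder gradient dE/dw
w, prev_delta = update_weights(w, grad, prev_delta)
```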

The process is repeated for each input pattern until the output error of the network decreases to a prespecified threshold value. The final weight values are then frozen and utilized to obtain the precise product abundances during the test session. The quality and success of the ANN training are assessed by calculating the error for the whole batch of training patterns using the normalized RMS error, which is defined as
where J is the number of output units, P is the number of training patterns, , , , and are the desired outputs at unit j, and , , , and are the actual outputs at the same unit j. A zero error denotes that all the output patterns computed by the stellar helium burning model match the expected values perfectly and that the model is fully trained. Similarly, the internal unit thresholds are adjusted by treating them as connection weights on links from an auxiliary input with a constant value. The above algorithm was programmed in C++ and run under Windows 7 on a Core i7 PC.
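
The displayed definition of the normalized RMS error was lost in extraction; the helper below implements one common normalization (dividing the summed squared error by P·J before the square root), which should be read as an assumption rather than the paper's exact formula.

```python
import numpy as np

def normalized_rms(desired, actual):
    """Normalized RMS error over P training patterns and J output units:
    sqrt( sum_{p,j} (d_pj - o_pj)^2 / (P*J) )  (assumed normalization)."""
    desired = np.asarray(desired, dtype=float)   # shape (P, J)
    actual = np.asarray(actual, dtype=float)     # shape (P, J)
    P, J = desired.shape
    return float(np.sqrt(np.sum((desired - actual) ** 2) / (P * J)))

# Example with P = 3 patterns and J = 4 outputs (X, Y, Z, R):
d = np.array([[0.9, 0.10, 0.00, 0.00],
              [0.8, 0.15, 0.04, 0.01],
              [0.7, 0.20, 0.08, 0.02]])
o = d + 1.0e-3                                   # a perfectly trained network would give o = d
print(normalized_rms(d, o))                      # ~1e-3
```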

4. Results and Discussion

4.1. Data Preparation

Based on the recurrence relations (equation (18)), we computed one pure helium gas model, X0 = 1, Y0 = 0, Z0 = 0, R0 = 0, and three rich helium gas models, X0 = 0.95, Y0 = 0.05, Z0 = 0, R0 = 0; X0 = 0.9, Y0 = 0.1, Z0 = 0, R0 = 0; and X0 = 0.85, Y0 = 0.15, Z0 = 0, R0 = 0. The fractional parameter covers the range with a step of 0.05. The calculations are performed for a time . Consequently, we have a total of 44 fractional helium burning models.
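
For clarity, the grid of analytical models can be enumerated as below; the α range of 0.5 to 1.0 is inferred from the 44 = 4 × 11 models at a step of 0.05 and should be treated as an assumption.

```python
import numpy as np

compositions = [             # (X0, Y0, Z0, R0)
    (1.00, 0.00, 0.0, 0.0),  # pure helium
    (0.95, 0.05, 0.0, 0.0),  # rich helium
    (0.90, 0.10, 0.0, 0.0),
    (0.85, 0.15, 0.0, 0.0),
]
alphas = np.round(np.arange(0.50, 1.0001, 0.05), 2)    # assumed range, 11 values

models = [(alpha, comp) for comp in compositions for alpha in alphas]
assert len(models) == 44                               # 4 compositions x 11 alpha values
```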

Figure 3 plots the product abundances of two gas models calculated at , where the solid lines are for the pure helium model with initial abundances X0 = 1, Y0 = 0, Z0 = 0, R0 = 0, and the dashed lines are for the rich helium model with initial abundances X0 = 0.95, Y0 = 0.05, Z0 = 0, R0 = 0. The effects of changing the composition of the gas are remarkable, especially for carbon (C12).

In Figure 4, we illustrate the effect of changing the fractional parameter on the product abundances calculated for a gas model with initial abundances X0 = 0.85, Y0 = 0.15, Z0 = 0, R0 = 0. It is clear that the effect of changing the fractional parameter on the behavior of the product abundances is small. This result is similar to the results obtained by [10] for models computed in the sense of the modified Riemann–Liouville fractional derivative.

4.2. ANN Training

For the training of the ANN used to simulate the helium burning network, we used part of the data calculated in the previous subsection. The data used for training the ANN are shown in the second column of Table 1.

The neural network (NN) architecture used in this paper for the helium burning network has three layers, as shown in Figure 1: an input layer, a hidden layer, and an output layer. Configurations with 10, 20, and 40 hidden neurons were tested, and we concluded that 20 neurons in a single hidden layer give the best network model for simulating the helium burning network. This number of hidden neurons was found to give the minimum RMS error of 0.000005 in a comparable number of training iterations. As a result, the NN configuration we used was 4-20-4, where the input layer has four inputs: the fractional parameter α, the time t (taking values from 3 to 2100 s in steps of 3 s), and two of the initial abundances, namely helium (X0) and carbon (Y0). We excluded the other two initial abundances (Z0 and R0) because their values are always zero, as indicated in Table 1. The output layer has four nodes, which are the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R).
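
A hedged sketch of the 4-20-4 feed-forward pass described above; the sigmoid hidden layer follows the text, while the linear output layer and the untrained random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(20, 4))   # input (alpha, t, X0, Y0) -> 20 hidden units
b1 = np.zeros(20)
W2 = rng.normal(scale=0.1, size=(4, 20))   # hidden -> 4 outputs (X, Y, Z, R)
b2 = np.zeros(4)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def predict(alpha, t, X0, Y0):
    """Forward pass of the 4-20-4 network; in practice the inputs (especially t)
    would be scaled to a comparable range before training."""
    u = np.array([alpha, t, X0, Y0], dtype=float)
    h = sigmoid(W1 @ u + b1)               # sigmoid hidden layer (20 neurons)
    return W2 @ h + b2                     # linear output layer (assumption)

print(predict(0.95, 300.0, 1.0, 0.0))      # four product abundances (untrained, illustrative)
```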

During the training of the NN, we used a learning rate of η = 0.035 and a momentum of . These values of η and were found to speed up the convergence of the backpropagation training algorithm without overshooting the solution. To demonstrate the convergence and stability of the computed weight parameters of the network layers, the convergence behavior of the input layer weights, biases, and output layer weights (, βi, and νi) for the helium burning network is displayed in Figure 5. As the figure shows, the weights are initialized to random values and, after a considerable number of iterations, converge to stable values.

4.3. Comparison between the NN Model and Analytical Model

After the end of the NN training phase, we used the final frozen weight values in the test phase to predict the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R). In this test phase, we used values of the fractional parameter α that were not used in the training phase to predict the helium burning network model. These values are shown in the third column of Table 1. The predicted values show very good agreement with the analytical values for the different helium models. A comparison between the predicted NN model values and the analytical model for two values of the fractional parameter (α = 0.55 and α = 0.95), along with the different helium models shown in Table 1, is displayed in Figures 6–9 for one pure helium gas model, X0 = 1, Y0 = 0, Z0 = 0, R0 = 0, and three rich helium gas models, X0 = 0.95, Y0 = 0.05, Z0 = 0, R0 = 0; X0 = 0.9, Y0 = 0.1, Z0 = 0, R0 = 0; and X0 = 0.85, Y0 = 0.15, Z0 = 0, R0 = 0. In all these figures, the very good agreement between the NN model and the analytical model is clear, which establishes the NN as a powerful tool for solving and modeling nuclear burning networks, one that could be applied to other stellar nuclear burning networks.

From the performed calculations, one can examine the effect of changing the fractional parameter on the four elements. Figures 6–9 illustrate the fractional product abundances of He4, C12, O16, and Ne20 as functions of time, from which some features can be observed. For all gas models, the difference between the abundances of He4 computed for the two values of the fractional parameter (α = 0.55, 0.95) is very small when the time is below seconds; after that, the difference becomes larger. It is also clearly noticeable that the abundance of C12 exhibits the same behavior.

The behaviors of the fractional product abundances of O16 and Ne20 are different from those of He4 and C12. The differences between the fractional product abundances of O16 are large right from the beginning of the ignition, whereas the differences between the fractional product abundances of Ne20 are very small for times below seconds and increase after that.

5. Conclusion

In the current research, we introduced an analytical solution to the conformable fractional helium burning network via a series expansion method, where we obtained the product abundances of the synthesized elements as functions of time. The calculations are performed for four different initial abundances: (X0 = 1, Y0 = 0, Z0 = 0, R0 = 0); (X0 = 0.95, Y0 = 0.05, Z0 = 0, R0 = 0); (X0 = 0.9, Y0 = 0.1, Z0 = 0, R0 = 0); and (X0 = 0.85, Y0 = 0.15, Z0 = 0, R0 = 0). The results of the analytical solution revealed that the conformable models have the same behaviors as the fractional models computed using the modified Riemann–Liouville fractional derivative. Secondly, we used a feed-forward NN to simulate the system of differential equations of the HB network. To do that, we performed the mathematical modeling of an NN to simulate the conformable helium burning network. We trained the NN using the backpropagation delta rule algorithm with training data for models covering the fractional parameter range with step . We then predicted the fractional models for the range with step . The comparison with the analytical solutions gives very good agreement for most cases, with a small difference obtained for the model with fractional parameter . The results obtained in this research show that modeling nuclear burning networks using an NN gives very good results and validate the NN as an accurate, robust, and trustworthy method for solving and modeling similar networks, one that could be applied to other stellar nuclear burning networks composed of conformable fractional differential equations.

Appendix

A. The Helium Burning Network

The kinetic equations governing the change in the number density of species i over time describe the nucleosynthesis of the elements in stars [35]:
where is the reaction cross section for the interaction involving species m and n, and all the reactions producing or destroying species i are summed over. The number density of species i is expressed in terms of its abundance by the relation
where is Avogadro’s number and is the mass of i in mass units.

The reaction rate is given by

For the helium burning reactions, the rates of the three reactions, in units of s⁻¹, could be written as [23]

Now, by putting for the helium, carbon, oxygen, and neon abundances in number density (in units of cm⁻³), respectively, and implementing equations (A.2)–(A.4), the abundance differential equation (equation (A.1)) could be written as

Using equations (A.4) and (A.5), the system could be written as
where the abundances , , and are expressed in number instead of number density. By replacing the reaction rates , , and in equation (A.6) by , , and , respectively, we obtain equation (2).

Data Availability

The Excel data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge the Academy of Scientific Research and Technology (ASRT), Egypt (Grant no. 6413), under the project Science Up. ASRT is the second affiliation of this research.