Abstract

This work models the IRI-2012 TEC map over a moderate geomagnetic storm period (5 days) in February 2015 and compares the performance of the models. The models are constructed with the help of cubic Bézier curves and machine learning; in a sense, the paper sets a classical, mechanical approach against a modern, computer-based one. The parametric curve approach builds the model from piecewise continuous cubic Bézier segments and employs only the TEC map. The design is separated into curve components at every five-hour point, and each component is handled independently. Instead of the traditional least squares method for finding the control points of the cubics, the mean of each five-hour piece of the TEC data is used. Accordingly, the prediction error can be kept at a rate that competes with the modern network approach. In the network model, 120 hours of the solar wind parameters and the TEC map of the storm are processed. The reliability of the network model is assessed by the correlation coefficient (R) and the mean square error. In modeling the TEC map with the classical approach, the mean absolute error is 0.0901% and the correlation coefficient (R) score is 99.9%. The R score of the network model is 99.6%, and the mean square error is 0.71958 TECU (at epoch 47). The results agree with the literature.

1. Introduction

One of the layers of the upper atmosphere is the ionosphere, which is ionized by cosmic and solar radiation. It covers roughly 60 to 700 km above the Earth's surface. This layer is critical for various forms of signal transfer, as it reflects and refracts radio waves, allowing long-distance communication. Radio waves are electromagnetic waves at radio frequencies; they allow data to be carried through the atmosphere without tangible links such as wires. The D layer (about 60–90 km), the E layer (90–150 km), and the F layer (150–500 km) are the ionospheric layers. The F layer is composed of two sublayers, named the F1 (150–200 km) and F2 (200–500 km) layers. Besides cosmic rays from space, X-ray and ultraviolet radiation from the Sun ionize atoms and molecules in the ionosphere [1–6], producing free electrons and ions. The apparent motion of the Sun causes regular daily changes in the ionosphere, such as maximum ionization during the daytime and reduced ionization at night. The ionosphere's ionization levels also vary with the seasons, influenced by changes in the angle of incidence of solar radiation and fluctuations in the Earth's magnetic field. Solar activity, geomagnetic storms (GS), and space weather conditions also affect the ionosphere. Solar activity refers to the various events and processes occurring on the Sun, which can have substantial impacts on Earth and near-Earth space; it follows an approximately 11-year cycle known as the solar cycle or sunspot cycle. The solar wind is a stream of charged particles constantly emitted by the Sun, and it interacts with the Earth's magnetosphere-ionosphere coupling. A GS [7–14] is a disturbance of the Earth's magnetic environment caused by changes in the solar wind, with potentially significant impacts on the electromagnetic stability of the atmosphere [15, 16]. A GS commences with a coronal mass ejection (CME). The CME resembles a spray blasted outward with very high pressure: a massive, energy-dense cloud of charged plasma bursts from the Sun toward the Earth. These solar wind particles, originating from coronal holes on the Sun's surface, interfere with the Earth's regular electromagnetic oscillation. These effects act from space toward the Earth. Finally, the ionosphere is also affected by volcanic activity and anthropogenic effects; these two types of effects are observed from the Earth's crust upward into the atmosphere [17].

One of the most reliable tools (parameters) that allows the observation of the above-mentioned regular or irregular changes is the TEC map [2, 9, 14, 18–25]. The TEC is a measure of the total number of free electrons present in a cross-sectional column of the Earth's ionosphere between a satellite and a receiver on the ground. It is a vital parameter in ionospheric research and is commonly employed in navigation systems, space weather monitoring, and atmospheric science [12, 13, 26–29]. The TEC map, expressed in TEC units (TECU), where 1 TECU corresponds to 10^16 electrons per square meter [2], represents the integrated electron density along the cross-section [30] between a transmitting satellite and a ground-based receiver. Data providers such as GNSS networks and the IRI supply TEC data collected from ground-based receivers globally. The signals from these satellites are affected by the ionosphere, and by analyzing the delays in the received signals, the TEC can be estimated.

This paper models, estimates, and compares the TEC map (from the IRI-2012) of a moderate (Dst = −55 nT) GS period, dated February 2, 2015, with Bézier curves (BC) [31–34] and an artificial neural network (NN). The reason the paper commences with the IRI [9, 35–37] TEC atlas is that it does not need interpolation; an illustration of an interpolated TEC atlas can be seen in the validation subsection of the modeling part. The reader can frequently encounter IRI TEC map models or comparisons in the literature. Although the paper demonstrates TEC estimation through modeling, its goal is to draw attention to the Bézier curve. The data obtained from OMNI are analyzed at hourly resolution.

The BC, developed by Pierre Bézier in the 1960s, is commonly employed in computer graphics, geometry, etc. for characterizing and controlling smooth curves. A BC is a parametric curve represented by a set of control points; it smoothly forms the shape characterized by these control points. The beginning and end control points must lie on the curve. The degree of a BC equals the number of control points minus one. A first-degree BC, also known as a linear curve, is a straight line segment specified by two control points. A second-degree BC, also known as a quadratic curve, is specified by three control points. A third-degree BC, also known as a cubic curve, is specified by four control points. A BC lies wholly within the convex hull of its control points; this property guarantees that the curve stays within the region specified by the control points. The reliability of the BC model estimation results is evaluated by the mean absolute error (ME) score.

The NN [9, 11, 14, 23, 38–45] model is an instrument that imitates the human brain and converts inputs into outputs. The data set sent to the input layer is processed in the hidden layer and presented to the output layer as the expected products. Artificial neurons provide interaction and communication between layers; the number of neurons must be chosen just large enough to avoid memorization. The causality principle [30] governs the approach to the NN iteration: the solar wind parameters are marked as the cause and the TEC map as the effect, so the solar wind parameters are the inputs of the NN model and the TEC map is its output. The NN uses the backpropagation iteration of Rumelhart et al. (1986) via the Scaled Conjugate Gradient (trainscg) training algorithm. The TEC data of the paper are from the North-Mid-Atlantic at 52.635°N and 31.884°W. The reliability of the NN model estimation results is evaluated by the correlation coefficient (R), the mean square error (MSE (TECU)), and the ME. The results agree with the literature [10–13, 36, 46].

The first part of the work presents the related previous works, the following part reports the GS data and dynamics, the third part presents the models, and the last part exhibits the conclusions of the work.

2. Data

This work models and compares the TEC (TECU) map from the IRI-2012. For the TEC map, the North-Mid-Atlantic (52.649°N–31.902°W) coordinates are selected.

The solar wind parameters that form the inputs of the artificial neural network (NN) model are the Bz (nT) component of the magnetic field, the electric field E (mV/m), the dynamic pressure P (nPa), the proton density N (1/cm³), the solar wind speed (km/s), and the temperature T (K). Figure 1 illustrates these parameters and the SYM-H index throughout the five days (120 hours). The February GS is located at the center of the five days (data from SPEDAS).

For the February 2, 2015, moderate GS: before detailing this GS, it is useful to glance at the physical causality of the storm's progression. A GS consists of three phases: the sudden commencement, the main phase, and the recovery phase. When researchers discuss a GS, they usually work with a 120-hour [10–13, 46] time frame with hourly data, in which the GS hour is located in the middle of the interval. In the first stage of these 120 hours, the solar wind slows down and weakens. However, just after the wind speed reaches its minimum value, sudden jumps in the dynamic pressure P (nPa) and the proton density N (1/cm³) occur. This sudden peak accompanies jets directed outward from the Sun, which can exceed 800 km/s. With this jet (burst), an energy-dense cloud of gas and particles is released into interplanetary space. This event is a coronal mass ejection (CME). The GS commences with the first CME; depending on the severity of the GS, more than one CME can be observed before the event. With the bursting of the CME, the high-velocity, plasmatic, electromagnetically charged particle cloud reaches the upper atmosphere of the Earth and causes a magnetic disturbance. The magnitude of this disturbance caused by the GS is measured with the disturbance storm time (Dst (nT)) zonal geomagnetic index. A Dst score of −50 nT to −30 nT represents a weak GS, a score of −100 nT to −50 nT a moderate GS, and a score of −200 nT to −100 nT a severe GS. After the first (or subsequent) CME, the Bz (nT) component of the magnetic field turns from northward to southward; in this period, positive magnetic field values are replaced by negative values. A few hours after the magnetic field reaches its minimum value, the Dst index also hits its minimum. This signature belongs to the main phase of the GS. The main phase causes intense magnetic and electric field fluctuations as well as a maximization of energy-dense plasmatic particles in the upper atmosphere. These fluctuations can cause disruptions and potential damage to power grids and pipelines, disrupt or interrupt radio communications, especially at high latitudes, cause drift in satellite communications, radiation hazards, and malfunctions in onboard electronics, and affect the operation of satellites. The anomaly of the electromagnetic oscillation can also degrade space weather forecast models. The recovery phase, observed after the main phase, is the return of the storm dynamics to the prestorm state: GS conditions calm down, the GS indicators return to their prestorm values, and the magnetosphere-ionosphere coupling regains the magnetic calm it had before the GS.
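As a quick illustration of the Dst thresholds quoted above, the following Python sketch (not part of the original study; the function name and boundary handling are assumptions) classifies a storm from its minimum Dst value:

```python
# A minimal sketch of the Dst-based storm classification described above:
# -50..-30 nT weak, -100..-50 nT moderate, -200..-100 nT severe.
def classify_storm(dst_nt: float) -> str:
    """Classify a geomagnetic storm from its minimum Dst value (nT)."""
    if -50 <= dst_nt <= -30:
        return "weak"
    if -100 <= dst_nt < -50:
        return "moderate"
    if -200 <= dst_nt < -100:
        return "severe"
    return "outside the classes discussed here"

print(classify_storm(-55))  # the February 2, 2015 storm -> "moderate"
```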

According to Figure 1, during the 5-day GS period, the Bz (nT) magnetic field component oscillates between a maximum of 7.1 nT and a minimum of −9.9 nT. The field reached its smallest value of −9.9 nT on February 2 at 00:00 UT. Six hours after the field reached this minimum, the Dst (nT) index reached its own minimum of −55 nT at 06:00 UT. This delay can be called the response time of the Dst index to this action of the magnetic field; generally, the response time is around 1–3 hours in weak storms and around 3–7 hours in moderate storms. The first CME of the February GS was observed on January 31 at 19:00 UT, with the solar wind speed having descended to around 400 km/s. When the first CME erupts, the pressure P (nPa) jumps from 2.65 nPa to 3.37 nPa and the proton density N (1/cm³) jumps from 8.1 1/cm³ to 10.1 1/cm³. A few hours later, at 23:00 UT, the pressure P reaches one of its high values of 6.76 nPa and the proton density N its maximum of 20.5 1/cm³.

The pairwise relations among the February GS parameters are exhibited in the Pearson correlation matrix (Table 1). When a table score approaches ±1, one observes a strong relationship. The hierarchical appearance of the data and their scattering are displayed in Figure 2.

One may see strong binary correlations between Bz and E, between N and P, and between T and the solar wind speed. It is necessary to recall that a minus score shows an inverse correlation. The binary relationships reported in Table 1 are determined by equation (9).

The clustering in Figure 2 can be read under two headings: the association of T (K) with the other parameters and the TEC–T association. Three groups, one containing E and T, one containing P and N, and one containing the TEC, can be seen in Figure 2.

3. Modeling

In the modeling part, the Bézier curve (BC) and the artificial neural network (NN) estimation model framework are discussed for the TEC map of the moderate geomagnetic storm (GS) on February 2, 2015.

3.1. Bézier Curve (BC)

Before presenting the BC, it is helpful to take a look at the basic differential geometric concepts.

Let $f$ be a function, $f:\mathbb{R}^{n}\to\mathbb{R}$. If the function $f$ has partial derivatives up to order $k$ at each point of $\mathbb{R}^{n}$ and these derivatives are continuous, the function $f$ is said to be of the class $C^{k}$. The set of all functions of the class $C^{k}$ from $\mathbb{R}^{n}$ to $\mathbb{R}$ is denoted by $C^{k}(\mathbb{R}^{n},\mathbb{R})$. If the function $f$ has partial derivatives of every order at every point $p$ of $\mathbb{R}^{n}$, the function $f$ is said to be of the class $C^{\infty}$, or a uniform (smooth) function. The set of all functions of the class $C^{\infty}$ from $\mathbb{R}^{n}$ to $\mathbb{R}$ is denoted by $C^{\infty}(\mathbb{R}^{n},\mathbb{R})$. For $p\in\mathbb{R}^{n}$, if the function $f$ is smooth in at least one open neighborhood of the point $p$, the function $f$ is said to be of the class $C^{\infty}$ at the point $p$, or a uniform (smooth) function there. Let $\varphi$ be a function, $\varphi:\mathbb{R}^{n}\to\mathbb{R}^{m}$, $\varphi=(f_{1},f_{2},\ldots,f_{m})$, of dimension $m$. If the component functions $f_{i}$ are uniform (smooth), then the function $\varphi$ is said to be uniform [47].

Let $\varphi$ be a function, $\varphi:\mathbb{R}^{n}\to\mathbb{R}^{m}$, $\varphi=(f_{1},f_{2},\ldots,f_{m})$. The Jacobian matrix of the function $\varphi$ at a point $u$ is
$$J(\varphi)(u)=\left[\frac{\partial f_{i}}{\partial u_{j}}(u)\right],\qquad 1\le i\le m,\; 1\le j\le n,$$
where $\partial f_{i}/\partial u_{j}$ is a partial derivative.

Let $\alpha$ be defined as $\alpha:I\to\mathbb{R}^{n}$, where $I$ is an open interval of $\mathbb{R}$. A uniform (smooth) transformation $\alpha$ is called a curve in the space $\mathbb{R}^{n}$. When a curve $\alpha$ is given in $\mathbb{R}^{n}$, the $n$ components of this transformation, expressed by $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$, are mentioned. The presence of the partial derivatives of each component, as well as of the Jacobian matrix, exhibits the uniformity (smoothness) of the curve (function).
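As a small illustrative example (not taken from the paper), consider the planar curve below; its components have derivatives of every order and its Jacobian never vanishes, so it is a uniform (smooth) curve in the sense defined above.

```latex
% A hypothetical example of a smooth (uniform) curve and its Jacobian,
% illustrating the definitions above (not from the paper).
\[
  \alpha : (0,1) \to \mathbb{R}^{2}, \qquad
  \alpha(u) = \bigl(\alpha_{1}(u), \alpha_{2}(u)\bigr) = \bigl(u,\, u^{2}\bigr),
\]
\[
  J(\alpha)(u) =
  \begin{bmatrix}
    \dfrac{d\alpha_{1}}{du} \\[4pt]
    \dfrac{d\alpha_{2}}{du}
  \end{bmatrix}
  =
  \begin{bmatrix} 1 \\ 2u \end{bmatrix}.
\]
% Every entry exists and is continuous for all u, so alpha is of class C^infinity.
```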

The expression
$$\big(u+(1-u)\big)^{n}=\sum_{k=0}^{n}\binom{n}{k}u^{k}(1-u)^{n-k}$$
is an $n$-degree binomial expansion. By this expansion, the Bernstein polynomial is
$$B_{k,n}(u)=\binom{n}{k}u^{k}(1-u)^{n-k},$$
where $0\le u\le 1$.

For any value of $u$ and any degree of the specified curve, $\sum_{k=0}^{n}B_{k,n}(u)=1$. One may see from this property the invariance of the BC [31, 32] under affine transformations; namely, the association between the BC and its control points is invariant. A Bézier formula is
$$P(u)=\sum_{k=0}^{n}B_{k,n}(u)\,P_{k},\qquad 0\le u\le 1,\tag{2}$$
where $P_{k}$ is a control point, $u$ is the parameter of the graph with $0\le u\le 1$, and $n$ is the degree of the formula. The degree of the Bézier curve depends on the number of control points: an $n$-degree graph must contain $n+1$ control points. The first and the last control points must be the initial and end points of the graph, respectively, but the other points commonly do not touch the graph. One may see $P(0)=P_{0}$ and $P(1)=P_{n}$ from equation (2), which shows that the first and the last control points are on the graph. The behavior at these points can also be seen from the tangent. The first-order derivative of the Bézier formula is presented by Theorem 1.

Theorem 1. The first-order derivative of the Bézier curve $P(u)$ is
$$P'(u)=n\sum_{k=0}^{n-1}B_{k,n-1}(u)\,(P_{k+1}-P_{k})\tag{3}$$
[48]. The $r$th-order derivative of the Bézier formula at $u=0$ and $u=1$ is
$$P^{(r)}(0)=\frac{n!}{(n-r)!}\sum_{k=0}^{r}(-1)^{r-k}\binom{r}{k}P_{k},\tag{4}$$
$$P^{(r)}(1)=\frac{n!}{(n-r)!}\sum_{k=0}^{r}(-1)^{k}\binom{r}{k}P_{n-k},\tag{5}$$
where $r$ is a parameter with $1\le r\le n$. It can be seen from equations (4) and (5) that the curve is tangent to the control polygon at the initial and end control points.

Some BCs can be seen in Figure 3.
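A minimal Python sketch of evaluating equation (2) through the Bernstein basis is given below; the function names and the sample control points are illustrative assumptions, not the paper's code.

```python
# Evaluating the Bézier formula (equation (2)) via the Bernstein basis.
from math import comb
import numpy as np

def bernstein(k: int, n: int, u: np.ndarray) -> np.ndarray:
    """B_{k,n}(u) = C(n,k) u^k (1-u)^(n-k), 0 <= u <= 1."""
    return comb(n, k) * u**k * (1.0 - u)**(n - k)

def bezier(control_points: np.ndarray, u: np.ndarray) -> np.ndarray:
    """P(u) = sum_k B_{k,n}(u) P_k for control points of shape (n+1, d)."""
    n = len(control_points) - 1
    return sum(bernstein(k, n, u)[:, None] * control_points[k] for k in range(n + 1))

# Example: a cubic (third-degree) curve through four control points.
P = np.array([[0.0, 8.2], [1.0, 8.9], [2.0, 9.4], [3.0, 8.7]])  # hypothetical (hour, TECU) pairs
u = np.linspace(0.0, 1.0, 50)
curve = bezier(P, u)
print(curve[0], curve[-1])  # the endpoints reproduce P0 and P3, as equation (2) implies
```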

In this study, composite Bézier plots are employed, each of which is of the third order (cubic). The third-order Bézier formula and its matrix presentation are as follows:
$$P(u)=(1-u)^{3}P_{0}+3u(1-u)^{2}P_{1}+3u^{2}(1-u)P_{2}+u^{3}P_{3},\tag{6}$$
$$P(u)=\begin{bmatrix}u^{3} & u^{2} & u & 1\end{bmatrix}\begin{bmatrix}-1 & 3 & -3 & 1\\ 3 & -6 & 3 & 0\\ -3 & 3 & 0 & 0\\ 1 & 0 & 0 & 0\end{bmatrix}\begin{bmatrix}P_{0}\\ P_{1}\\ P_{2}\\ P_{3}\end{bmatrix},\tag{7}$$
where $P_{0},P_{1},P_{2},P_{3}$ are the control points and $u$ is the parameter with $0\le u\le 1$. It is immediately apparent that this formation denotes a curve. Let $\alpha$ be defined as $\alpha:I\to\mathbb{R}^{3}$, where $I=(0,1)$.

The Jacobian matrix of this curve is
$$J(\alpha)(u)=\left[\frac{d\alpha_{1}}{du}\;\;\frac{d\alpha_{2}}{du}\;\;\frac{d\alpha_{3}}{du}\right]^{T}.\tag{8}$$

Equation (8) reveals that the third-order structures used in this study can be called curves. The segmented curve provides many conveniences in BC modeling: one may realize that as the number of control points of a single curve increases, the ease and comfort of manipulation decrease. When the piecewise construction of these curves is discussed, the continuity conditions should be considered. If the curve is to be modeled in two separate segments, the endpoint of the first curve must be the initial point of the second curve; this is the continuity condition. In this problem, all BCs satisfy the continuity condition, meaning that the endpoint of the first curve is located at the initial point of the second curve, the endpoint of the second curve at the initial point of the third curve, and so on. Thus, the family of piecewise-continuous uniform composite curves forms the BC (cubics). The study models 24 different BCs for the 120-hour TEC (TECU) map; by creating a piecewise-continuous curve family, the whole is reached from its parts. In this study, the average of the data is used to determine the control points, unlike the traditional least squares method. Except for the initial and endpoints of the 24 different continuous BCs, the other control points are determined by calculating the individual averages of the TEC data for each curve segment. That is, for each curve, the initial and endpoints are taken from the actual TEC map, while the average of seven separate TEC data is taken for the two interior control points. Thus, the model curve predicts each TEC value with the help of these control points and different u parameters. The TEC estimation results of the BC model are notably remarkable: the estimation model curve predicts the TEC map for almost all hours of the 120-hour storm period with a mean absolute error (ME) of 0.0901 and a variance of 0.0138. The ME of the estimated TEC can be determined by
$$ME=\frac{1}{n}\sum_{i=1}^{n}\left|TEC_{i}-TEC_{est,i}\right|,$$
where $TEC_{est}$ is the estimated TEC value. In the BC model, the TEC map average is 8.60085 TECU, and the estimated TEC map average is 8.5837 TECU. The model deviates from this error mean in only four hours; just for these four hours, it cannot make a prediction as successful as in the other hours. The deviations at the 2nd, 75th, 110th, and 111th hours can be observed in Table 2.
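The following Python sketch illustrates the piecewise idea described above under stated assumptions: the 120-hour TEC series is split into five-hour cubic segments whose endpoints are taken from the data and whose two interior control points are set to the segment mean. The exact averaging window, the synthetic TEC series, and the function names are assumptions, not the paper's code.

```python
import numpy as np

def piecewise_cubic_bezier_fit(tec: np.ndarray, seg_len: int = 5):
    """Fit one cubic Bézier per seg_len-hour window; interior control points
    are the window mean instead of a least-squares solution."""
    segments = []
    for start in range(0, len(tec) - 1, seg_len):
        window = tec[start:start + seg_len + 1]   # shared endpoints keep continuity
        p0, p3 = window[0], window[-1]            # endpoints lie on the data
        p1 = p2 = window.mean()                   # interior points from the mean
        segments.append((p0, p1, p2, p3))
    return segments

def evaluate(segments, samples_per_seg: int = 5):
    """Evaluate each cubic with equation (6) on a uniform u grid."""
    u = np.linspace(0.0, 1.0, samples_per_seg + 1)
    est = []
    for p0, p1, p2, p3 in segments:
        seg = ((1 - u)**3 * p0 + 3*u*(1 - u)**2 * p1
               + 3*u**2*(1 - u) * p2 + u**3 * p3)
        est.append(seg[:-1])                      # drop shared endpoint to avoid duplicates
    return np.concatenate(est)

# Hypothetical 120-hour TEC series (TECU) and the mean absolute error of the fit.
tec = 8.6 + np.sin(np.linspace(0, 6 * np.pi, 121))
tec_est = evaluate(piecewise_cubic_bezier_fit(tec))
me = np.mean(np.abs(tec[:len(tec_est)] - tec_est))
print(f"ME = {me:.4f} TECU")
```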

In addition to the ME score, it is appropriate to present the R correlation coefficient, which will also be used in the NN model. The R correlation coefficient is
$$R=\frac{\operatorname{cov}(x,y)}{\sqrt{\operatorname{var}[x]\operatorname{var}[y]}},\tag{9}$$
where cov(x, y) is the covariance of the variable x (actual TEC map) and the variable y (estimated TEC map), var[x] is the variance of x, and var[y] is the variance of y. The BC estimation R value is 99.96%. Figure 4 exhibits the TEC (TECU) and TECest (TECU) maps with their error and R correlation coefficient. The estimation outcomes appear satisfactory.
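A short sketch of equation (9) in Python (illustrative only; the function name is an assumption):

```python
import numpy as np

def r_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Equation (9): R = cov(x, y) / sqrt(var[x] * var[y])."""
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())

# e.g. r_coefficient(tec, tec_est) for the actual and estimated TEC maps
```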

In this part, the NN model framework is discussed.

3.2. Artificial Neural Network (NN)

In the NN modeling of the TEC (TECU) map, the solar wind parameters are employed as the inputs and the TEC map as the output [39]. Figure 5 pictures the NN background.

The NN is a machine-learning prototype inspired by the design and mechanism of the human brain. The NN is used on many problems such as model formation, data processing and estimation, image processing, language processing, pattern-relationship building, etc. The NN architecture is composed of layers that are interconnected by artificial neurons. If the interconnection is governed by enough neurons, undesirable conditions such as memorization or an inability to learn are avoided [49]. If the NN model is not fed enough neurons, complete learning does not occur; similarly, if the NN model is fed too many neurons, the model prefers to memorize, since memorization is easier for the model than learning. The NN framework is formed by the input layer, the hidden layer(s), and the output layer. The data called the input are given to the input layer unprocessed, and it is the first layer. The cause-effect affinity should be the main structure of all models according to the causality principle: in a GS, the solar wind parameters are the cause and the zonal geomagnetic indices or the TEC are the effect. The hidden layer(s) comes after the input layer [39, 50]. Its name is hidden because the training data and tools cannot be directly seen; the hidden layer is the center of the learning zone. The training is accomplished by neurons. Each neuron in the network performs simple mathematics: it accepts data from the input layer, applies a weighted sum to the inputs, adds a bias term, and then passes the result through an activation function. The activation function provides a nonlinear perspective to the NN model after the weighted sum of the inputs is calculated and the biases are added. Typical activation functions are the sigmoid-logistic function (results are observed between 0 and 1) and tanh (results are observed between −1 and 1). Weights specify the strength of the associations between neurons; during the training, these weights are iteratively adapted to minimize the difference between the forecasted outputs and the actual targets. The bias is a supplemental parameter that allows revising the activation threshold of a neuron. The last segment of the network is the output layer; it produces the outcomes-estimations of the model. This paper's outcomes are the IRI-2012 TEC (TECU) map. In the NN model, training is one of the most critical phases that reveal the stability of the model. The iterative approach of forward propagation, loss calculation, and backpropagation is repeated over numerous epochs (iterations) until the network learns to make accurate estimations. By adapting the weights and biases during the training procedure, the NN learns to generalize and make forecasts on data, which is the ultimate goal of the learning procedure. The forward propagation approach flows from the input layer to the output layer with each iteration: the input variables are multiplied by the weights, and the resulting values are transmitted through the activation functions to build the forecasts. In backpropagation, feedback is provided after each iterative step (epoch) to keep the error score at a minimum and to yield accurate results. The NN performance is evaluated by means of the loss function and its tools. The loss function quantifies the difference between the estimated results and the actual targets; the standard loss function employs the correlation coefficient R (equation (9)) and the mean squared error (MSE) for classification and evaluation.
Backpropagation is the approach of revising the weights and biases of the NN's variables to minimize errors (deviation). This is accomplished by computing the gradients of the loss with respect to the network's variables and then adapting the weights using optimization methods like gradient descent or its variants. The gradient descent procedure drives the overall iteration by operating on the weights of the data. Newton's procedure [51] and gradient descent are traditional optimization techniques in the backpropagation approach.
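For illustration, a generic (not paper-specific) sketch of the weight update that gradient descent performs in each backpropagation step is shown below; the function name and learning rate are assumptions.

```python
# One gradient-descent update: w <- w - lr * dLoss/dw.
def gradient_descent_step(weights, gradients, learning_rate=0.01):
    return [w - learning_rate * g for w, g in zip(weights, gradients)]
```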

This paper's NN model uses equation (10) for the activation signal,
$$y=\sum_{i}w_{i}x_{i}+b,\tag{10}$$
where $w_{i}$ is a weight, y is the x-dependent variable of the initiation signal, x is the input (dependent variables), and b is the bias. The sigmoid transfer function is
$$f(y)=\frac{1}{1+e^{-y}},\tag{11}$$
where f is the logistic function.
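A minimal sketch of equations (10) and (11) for a single neuron (illustrative; the function name is an assumption):

```python
import numpy as np

def neuron_output(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """Weighted sum plus bias, passed through the logistic sigmoid."""
    y = np.dot(w, x) + b             # equation (10)
    return 1.0 / (1.0 + np.exp(-y))  # equation (11), output in (0, 1)
```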

After the model is processed by the equation (11) sigmoid function, the TEC map is estimated as the NN output through a linear function in the output layer. In the work, five days (120 hours) of data are used for the forecasting. The NN uses 84 hours (70%) of the data for training, 24 hours (20%) for testing, and 12 hours (10%) for validation. The paper prefers the backpropagation iteration for minimizing the NN prediction model's deviation (error); the gradient descent approach drives the overall iteration using the weights of the variables. The paper selects the Scaled Conjugate Gradient (trainscg) training algorithm for the IRI-2012 TEC estimation. This work employs 35 neurons for the interaction of the layers and the R (equation (9)) and MSE (TECU) scores for evaluating the NN model. The MSE (equation (12)) is
$$MSE=\frac{1}{n}\sum_{i=1}^{n}\left(TEC_{i}-TEC_{est,i}\right)^{2}.\tag{12}$$
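The sketch below is a hedged approximation of the setup described above, written with scikit-learn rather than the MATLAB toolbox the paper uses; MATLAB's Scaled Conjugate Gradient (trainscg) is not available here, so the 'lbfgs' solver stands in for it, and the synthetic data, split boundaries, and variable names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))               # 120 hourly samples of 6 solar wind parameters (cause)
y = 8.6 + rng.normal(scale=0.5, size=120)   # hypothetical hourly TEC map (effect), in TECU

# 84 h training (70%), 24 h testing (20%), 12 h validation (10%), as in the text.
X_train, y_train = X[:84], y[:84]
X_test,  y_test  = X[84:108], y[84:108]
X_val,   y_val   = X[108:], y[108:]

model = MLPRegressor(hidden_layer_sizes=(35,), activation="logistic",
                     solver="lbfgs", max_iter=500, random_state=0)
model.fit(X_train, y_train)

mse = np.mean((y_test - model.predict(X_test)) ** 2)  # equation (12)
print(f"test MSE = {mse:.3f}")
```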

When the NN model concludes a task, it creates appropriate outcomes. The paper tries to avoid memorization in its iterations. The iterations over the time series are finalized once the training-test-validation updates (epochs) reach stability (Figure 6); when the MSE (TECU) reaches constancy, the iteration should finish. Figure 6 exhibits the MSE value and R correlation coefficient of the estimated TEC scores for the GS on February 2, 2015.

The average of the actual TEC (TECU) map across the 120 hours of the NN prediction model is 8.6958 TECU, and the average of the estimated TEC map is 8.6871 TECU. The MSE (TECU) score is calculated as 0.71958 TECU at the end of 47 iterations (epochs). The R correlation coefficient is calculated as 99.59%.

Observing the NN estimation results obtained in this work together with the literature gives the reader the opportunity to compare; some selected discussions can be found in Table 3. Figure 7 exhibits the training, validation, and testing R scores of the modeling.

For the February GS, the R values of this work are 99.8% for training, 99.2% for validation, and 99.1% for testing. It is observed that there is harmony among the R scores.

3.2.1. Validation of the Problem

Lastly, the reader can witness the validation of the R correlation coefficient's reliability and the modeling of the CODE TEC atlas at different coordinates. Is the R correlation score of the models obtained randomly? Does the score relate to the event(s) of the problem? Or are the scores achieved by chance? Accordingly, the reliability and significance test may be accomplished with the traditional null hypothesis. Three individual phases ought to be tracked:
(i) The hypothesis is claimed.
(ii) The t score is computed.
(iii) The score is compared with the table's t-score. If the t value is greater than the table score, the null hypothesis is rejected; if it is smaller, it is accepted.
(iv) The null hypothesis: the R correlation coefficient is a random value.
(v) S_R is provided such that t = R/S_R, and n is 120 hours; n − 2 gives the degrees of freedom. For the moderate geomagnetic storm, the Bézier and ANN models' TEC atlas t-scores are t = 2.43 and 2.41, respectively.
(vi) The table t-score is 1.66 for the 95% confidence interval and 2.36 for the 99% confidence interval. The considered t values are larger than the table scores, so the null hypothesis is rejected. Hereby, the R values of the paper's models are significant and related to the problem; they are not random.
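As a hedged cross-check of the table values quoted above (assuming a one-sided test with n − 2 = 118 degrees of freedom), the critical t-scores can be reproduced with SciPy:

```python
from scipy.stats import t

df = 120 - 2
print(round(t.ppf(0.95, df), 2))  # ~1.66, the 95% table score
print(round(t.ppf(0.99, df), 2))  # ~2.36, the 99% table score
# Both model t-scores (2.43 and 2.41) exceed these, so the null hypothesis is rejected.
```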

If the reader glances at it from a different perspective, (s)he can see that similar results can be obtained with the CODE TEC atlas. In this part, the work exhibits the (interpolated) TEC atlas collected from CODE. The CODE GIMs span a ±87.5° latitudinal and ±180° longitudinal spectrum with a 2.5° × 5° spatial and hourly resolution [53]. The TEC atlas interpolation is formed with the assistance of the four points nearest to the specified location. The latitude and longitude windows are determined as 35.000°N–37.500°N and 25.000°E–30.000°E for the interpolation. This paper presents the full TEC atlas of the close neighborhood that contains the Bodrum (36.929°N–27.414°E) region (Turkey). The 365-day, hourly TEC data are from 2017. The bivariate interpolation is conducted on the specified points [2, 54]. To compute the related TEC data, the calculated scores are divided by ten.
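For illustration only, the sketch below shows a generic bilinear interpolation from the four surrounding grid points; it is an assumption-level stand-in for the bivariate interpolation of [2, 54], and the function name and grid layout are hypothetical.

```python
import numpy as np

def bilinear_tec(lat, lon, grid_lats, grid_lons, tec_grid):
    """Interpolate TEC at (lat, lon) from the four surrounding grid points.
    grid_lats and grid_lons must be sorted ascending (e.g. 2.5° x 5° CODE grid)."""
    i = np.searchsorted(grid_lats, lat) - 1
    j = np.searchsorted(grid_lons, lon) - 1
    t = (lat - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
    s = (lon - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
    return ((1 - t) * (1 - s) * tec_grid[i, j] + (1 - t) * s * tec_grid[i, j + 1]
            + t * (1 - s) * tec_grid[i + 1, j] + t * s * tec_grid[i + 1, j + 1])

# e.g. bilinear_tec(36.929, 27.414, lats, lons, tec_map) for the Bodrum region
```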

The TEC atlas acquired and interpolated from another zone (coordinate) for the validation of the model provides acceptable outcomes, like the null hypothesis test, when modeled with the procedure employed in the investigation. Figures 8(a)–8(c) exhibit the outcomes.

Here, TEC is the actual TEC data and TECest is the estimated TEC score. Figure 8 displays the model results of 365 days of the TEC atlas for the year 2017, with the mean absolute error. The Bézier model of the TEC atlas is established by second-order curves (Bézier quadratics). The Bézier curve shown in Figure 8 is a second-order (quadratic) model formed by piecewise, segmented, and continuous curves. The ANN model is the same model used in the study.

4. Conclusion

Comparing an artificial neural network with a mechanical curve package is an invaluable experience for the paper.

While the network is computer-based, the curve stands only on a classical differential geometry approach. One can now see that the model results of the classical Bézier curve agree with those of the neural network. The success of the Bézier curve in modeling lies in the harmony of the piecewise continuous curves with their control points. Instead of fitting a single curve to the whole TEC map, working with piecewise continuous curves, where each segment has the same cubic structure, minimizes the error. The 120-hour TEC map is treated with 24 distinct segments, and these segments obey the causality of the geometric curve approach. The prediction outcomes of the curve model are evaluated with the mean absolute error and the R correlation coefficient. The mean absolute error of the TEC prediction model of the Bézier curve is 0.0901% and the R correlation coefficient of this model is 99.9%.

Another contribution of the paper to the literature is that the TEC map prediction results of the network model are satisfactory. When the consequences of the paper are compared with the discussions presented earlier, they are competitive. The network model is approached with a mathematical perspective, and the causality principle governs the models: the February moderate storm solar wind parameters are the cause and the TEC map model is the effect. The network model results are evaluated with the mean square error and the R correlation coefficient. While the R correlation coefficient score of the results is 99.6%, the mean square error is 0.71958 (TECU) after 47 epochs (iterations).

With the excitement of introducing the Bézier TEC (TECU) model to the reader for the first time, the study leaves some of its limitations to subsequent discussions. The efficiency of a quadratic curve prototype of the model should be discussed. The curves are selected from a class that satisfies only the positional continuity condition; how would a curve model from a class with stricter smoothness rules affect the Bézier prototype? How does the model work over the long term (longer time intervals)? Can this mechanical model, like the neural network, produce the same effective results for many more data sets? The specification of the control points that are not on the curve is performed by averaging; this selection should also be tested with different methods such as the least squares method. The Bézier model gives significant results in Euclidean topology; the results should also be checked for structures where the topology changes. Achieving the same results in different topologies would strengthen the place of the model, which does not require any preliminary preparation, in astrophysical studies.

Today, the artificial neural network approach has a methodological influence on space exploration. Estimating the TEC map with the Bézier curve model, which is not often encountered in the TEC-ionosphere modeling literature, has the potential to offer a different perspective in space weather studies and Earth-ionosphere discussions.

Data Availability

The data used to support the findings of this study are available on the following pages: URL 1: NASA OMNIWeb, https://omniweb.gsfc.nasa.gov/form/dx1.html; URL 2: International Reference Ionosphere, https://ccmc.gsfc.nasa.gov/modelweb/models/iri2012_vitmo.php. If the data are requested, they can be obtained from the authors.

Disclosure

The abstract of this manuscript was presented orally, only at the conference with the same title (see [55]), by the first author without any proof of the results, but all the results in this manuscript were obtained with significant contributions from the second author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.