#### Abstract

To address the difficulty of estimating the drift of navigation marks, a fractional-order gradient descent with momentum RBF neural network (FOGDM-RBF) is designed. Its convergence is proved, and it is used to estimate the drifting trajectories of navigation marks at different geographical locations. First, the weights of the neural network are initialized. The navigation mark's meteorological, hydrological, and initial position data are taken as the input of the neural network, and the mark's position at a later time is taken as the output. The difference between this later position and the position estimated by the network serves as the error function used to train the network, which is then used to estimate the mark's position. The influence of sea conditions and months of the year is analyzed. The experimental results and error analysis show that FOGDM-RBF outperforms other algorithms at trajectory estimation and interpolation, has better accuracy and generalization, and does not easily fall into local optima. It effectively accelerates convergence and improves the performance of the gradient descent method.

#### 1. Introduction

A navigation mark is an artificial mark that warns of the boundary of a channel. It helps ships navigate, locate themselves, and avoid obstacles [1]. Navigation marks placed at sea are extremely susceptible to drifting due to wind and waves, and this drifting poses threats to the safety of ships. Cheng [2] used an optimized calculation method for anchor chain length and found that the maximum drift value can be reduced below the drift alarm threshold, reducing the number of false alarms raised by navigation marks.

In recent years, artificial neural networks have been widely used in pattern recognition, expert systems, robotics, complex system control, and so on. The weights of a neural network are obtained by training the network. Gradient descent (GD) is a basic method for updating and optimizing these weights. Yao et al. proposed a three-term gradient descent method with subspace techniques [3]. Liu and Lan presented an adaptive neural gradient descent control for a class of nonlinear dynamic systems with chaotic phenomena [4]. Standard GD, however, has two main shortcomings: slow training and a tendency to fall into local optima. It takes a long time to find a convergent solution, and the descent direction must be recalculated and adjusted at each step. When applied to large datasets, the parameters must be updated for every input sample, and every iteration must traverse all samples. Once the method falls into a saddle point, the gradient becomes zero and the model parameters are no longer updated.

The gradient descent with momentum (GDM) method was proposed to solve this problem [5]. By accumulating past gradient values, fluctuation on the path to the optimal value is reduced and convergence is accelerated. GDM accelerates learning when the current and past gradients point in the same direction, while restraining oscillation when they point in different directions.
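As a minimal illustrative sketch (not the paper's exact formulation), the GDM update can be written with a velocity term that accumulates past gradients; setting the momentum factor to zero recovers plain GD. The names and default values here are illustrative choices:

```python
def gdm_step(w, v, grad, lr=0.1, beta=0.9):
    """One GDM update: v accumulates past gradients, damping oscillation."""
    v = beta * v + lr * grad          # accumulate history of gradients
    return w - v, v                   # beta = 0 recovers plain gradient descent

# Toy example: minimize E(w) = 0.5 * (w - 3)^2, whose gradient is w - 3.
w, v = 0.0, 0.0
for _ in range(300):
    w, v = gdm_step(w, v, w - 3.0)
```

With momentum, the iterate approaches the minimizer w = 3 faster than plain GD would at the same learning rate, since consecutive gradients point in the same direction and reinforce each other.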

Fractional calculus dates back to 1695, when Leibniz and L'Hôpital initiated the discussion of calculus based on fractional derivatives and integrals. Fractional calculus has demonstrated more advantages than integer-order calculus in the field of neural networks. Wang et al. [6] proposed a fractional gradient descent method for the backpropagation (BP) training of neural networks. The Caputo derivative was employed to evaluate the fractional gradient of the error, defined as the traditional quadratic energy function. However, the boundedness of the weight sequences was not rigorously proved. Khan et al. [7] proposed a fractional descent-based learning algorithm for training radial basis function neural networks that is a convex combination of conventional and fractional gradients. However, its network architecture can be improved. Yang et al. [8] extended the fractional steepest descent approach to BP training of feedforward neural networks (FNN). However, the results fluctuated with different fractional orders, requiring the optimal fractional order to be designated manually. Liu et al. proposed a quasi-fractional-order gradient descent method with adaptive step size for system identification [9]. Wei et al. proposed a generalization of the gradient method with a fractional-order gradient direction [10]. Cheng et al. used the multi-innovation fractional-order stochastic gradient for identification of Hammerstein nonlinear ARMAX systems [11].

To improve the training ability of a neural network, this paper proposes a fractional-order gradient descent with momentum method for training RBF neural networks. Compared with reference [12], the convergence of the proposed algorithm is proved in this paper. Compared with references [13, 14], this paper uses both momentum and the gradient descent method.

In recent years, much work has been done in the area of neural networks. Zouari et al. proposed an adaptive backstepping control for a single-link flexible robot manipulator driven by a DC motor [15]. Zouari et al. presented an adaptive backstepping control for a class of uncertain single-input single-output nonlinear systems [16]. Zouari et al. used a robust adaptive control for a class of nonlinear systems using the backstepping method [17]. Zouari et al. adopted a robust neural adaptive control for a class of uncertain nonlinear complex dynamical multivariable systems [18]. Zouari et al. proposed a neuro-adaptive tracking control of noninteger-order systems with input nonlinearities and time-varying output constraints [19]. Haddad et al. used a variable-structure backstepping controller for multivariable nonlinear systems with actuator nonlinearities based on the adaptive fuzzy system [20]. Zouari presented a neural network-based adaptive backstepping dynamic surface control of drug dosage regimens in cancer treatment [21].

The training of neural networks faces several challenges, such as slow training speed, difficulty in parameter selection, falling into local optima, and oscillation of the convergence process. The proposed method is used for predicting the drift of navigation marks. The main innovations of this paper are as follows: (1) fractional calculus is applied to the gradient descent with momentum algorithm, following the fractional-order gradient direction of the generalized gradient method, for training neural networks; (2) the convergence of the proposed algorithm is proved; (3) the new algorithm is used for predicting the drift of navigation marks.

The paper is organized as follows. Section 2 briefly describes the theoretical background and related works on the estimation of drifting of the navigation marks. Section 3 presents the proposed drifting estimation algorithm using fractional-order gradient descent with momentum for the RBF neural network. Experimental results are shown in Section 4, and Section 5 concludes the paper.

#### 2. Related Work

##### 2.1. Dynamic Equation of Navigation Mark Drifting

To fix a navigation mark at its designed position so that it can endure a severe marine environment without shifting, its mooring equipment generally consists of a main anchor chain, a sinking stone, an auxiliary anchor chain, and an anchor. Therefore, compared with its designed latitude and longitude, the drift of a navigation mark is relatively small, and its drifting motion can be regarded as a linear model.

Delays in communication from the equipment influence the reporting intervals of the navigation marks' positions. Let the *k*^{th} time interval between the navigation mark's position reports be defined, along with the mark's latitude, longitude, speed over the ground, and moving angle over the ground. The latitude and longitude of the navigation mark at the (*k* + 1)^{th} time instant can then be calculated as follows:

Define as the state vector, that is,

Define as the input vector, which can be calculated as follows:

Consider the presence of noise in the actual system. Define as system noise. The system state equation of equations (1) and (2) can be written as follows:

Define as the measurement vector. Define as the measurement noise. The system observation value can be written as follows:
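For reference, the description above matches the standard discrete-time linear state-space form; the matrix symbols below are generic placeholders rather than necessarily the paper's own notation:

$$x_{k+1} = A x_k + B u_k + w_k, \qquad z_k = C x_k + v_k,$$

where $x_k$ is the state vector, $u_k$ the input vector, $w_k$ the system noise, $z_k$ the measurement vector, and $v_k$ the measurement noise.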

##### 2.2. RBF Neural Network

In 1985, Powell proposed the radial basis function (RBF) method for multivariate interpolation. The most frequently used radial basis function is the Gaussian function

$$\varphi_j(\mathbf{x}) = \exp\!\left(-\frac{\|\mathbf{x}-\mathbf{c}_j\|^2}{2\sigma_j^2}\right), \quad j = 1, \dots, m,$$

where $\mathbf{x}$ is the input vector, $\|\cdot\|$ denotes the Euclidean norm of its argument, $\varphi_j$ denotes the radial basis function, $\mathbf{c}_j$ denotes the central vector of the function, $\sigma_j$ denotes the width of the radial basis function, $\mathbf{b}$ denotes the threshold vector, $m$ denotes the number of hidden layer nodes, and $n$ denotes the number of input training samples. The output of the neural network is

$$y = \sum_{j=1}^{m} w_j\,\varphi_j(\mathbf{x}) + b.$$
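A minimal sketch of the forward pass of such a network, with Gaussian hidden units followed by a linear output layer (variable names are illustrative):

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass: Gaussian hidden units followed by a linear output layer."""
    hidden = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s ** 2))
        for c, s in zip(centers, widths)
    ]
    return bias + sum(w * h for w, h in zip(weights, hidden))

# Two hidden nodes: an input located at the first center activates that unit fully.
y = rbf_forward([0.0], centers=[[0.0], [1.0]], widths=[1.0, 1.0],
                weights=[1.0, 0.0])
```

Since the Gaussian unit attains its maximum of 1 at its own center, an input at the first center with a zero weight on the second unit gives an output of exactly 1.0.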

##### 2.3. Fractional-Order Calculus

Fractional calculus extends derivatives and integrals to noninteger orders. It provides a more precise tool for describing physical systems. The Riemann–Liouville (RL) fractional differintegral is one of its most common definitions. For a function *x* defined on [*t*_{0}, *t*], the RL fractional integral is defined as follows:

$${}_{t_0}I_t^{\alpha}\,x(t)=\frac{1}{\Gamma(\alpha)}\int_{t_0}^{t}(t-\tau)^{\alpha-1}x(\tau)\,\mathrm{d}\tau,$$

where $\alpha$ is the fractional order, $\tau$ is the integral variable, and $\Gamma(\cdot)$ is the gamma function, defined as follows:

$$\Gamma(\alpha)=\int_{0}^{\infty}e^{-t}t^{\alpha-1}\,\mathrm{d}t.$$

The RL fractional derivative is defined as follows:

$${}_{t_0}D_t^{\alpha}\,x(t)=\frac{1}{\Gamma(n-\alpha)}\frac{\mathrm{d}^n}{\mathrm{d}t^n}\int_{t_0}^{t}(t-\tau)^{n-\alpha-1}x(\tau)\,\mathrm{d}\tau,$$

where $n-1<\alpha\le n$ and $n$ is a positive integer near $\alpha$. When *α* = 1, the algorithm degenerates to the integer-order case. When *α* is greater than 1, the performance becomes worse.

Lemma 1. *For the power function $x(t)=(t-t_0)^{\gamma}$ with $\gamma>-1$, when $0<\alpha<1$, the following formula holds:*

$${}_{t_0}D_t^{\alpha}(t-t_0)^{\gamma}=\frac{\Gamma(\gamma+1)}{\Gamma(\gamma-\alpha+1)}(t-t_0)^{\gamma-\alpha}.$$
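The fractional power rule of Lemma 1 can be checked numerically. The sketch below uses the Grünwald–Letnikov approximation, which agrees with the RL derivative for this class of functions; the step size and test point are arbitrary illustrative choices:

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t."""
    n = int(round(t / h))
    weight, total = 1.0, f(t)
    for j in range(1, n + 1):
        weight *= 1.0 - (alpha + 1.0) / j   # recursive binomial weights (-1)^j C(alpha, j)
        total += weight * f(t - j * h)
    return total / h ** alpha

# Power rule check: D^alpha of t at t = 1 should equal Gamma(2) / Gamma(2 - alpha).
alpha = 0.5
exact = math.gamma(2.0) / math.gamma(2.0 - alpha)
approx = gl_fractional_derivative(lambda t: t, 1.0, alpha)
```

The approximation error shrinks with the step size `h`, so the numerical value lands close to the analytic one given by the power rule.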

#### 3. Main Results

##### 3.1. FOGDM-RBF Algorithm

Let $d(n)$ denote the expected output response at the $n$-th iteration of the neural network, and let $y(n)$ denote the actual output. Then, the error signal can be defined as follows:

$$e(n)=d(n)-y(n).$$

In 1986, the American cognitive psychologists D. E. Rumelhart and J. L. McClelland introduced the generalized delta rule (GDR) for neural network learning. The delta rule and gradient descent minimize the square of the difference between the actual and the desired output of the neural network; the loss function is generally the least-squares error. After defining the error criterion function, the next step is to adjust the weights to minimize it. The loss function can be minimized by improving the training scheme. Therefore, the objective function is often defined as follows:

$$E(n)=\frac{1}{2}\sum_{k}e_k^{2}(n).$$

##### 3.2. Denote

Based on the gradient descent with momentum algorithm, the weight update can be obtained, in which the learning rate and the momentum coefficient appear; the momentum coefficient is designed in terms of the momentum factor and the Euclidean norm. A modified Riemann–Liouville fractional definition is employed [7]. According to the definition of the fractional derivative, the following can be obtained:
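As a runnable sketch only, one common way to realize a fractional-order gradient direction (used, e.g., in the line of work of [10]) scales the integer-order gradient by a power of the distance from an initial weight, combined here with a momentum term. All names, defaults, and the specific fractional form are illustrative assumptions, not the paper's exact update equations:

```python
import math

def fractional_gradient(grad, w, w0, alpha):
    """Power-law fractional gradient direction (illustrative form, cf. [10])."""
    return grad * abs(w - w0) ** (1.0 - alpha) / math.gamma(2.0 - alpha)

def fogdm_step(w, v, grad, w0, lr=0.1, beta=0.5, alpha=0.9):
    """One FOGDM-style update: momentum accumulates the fractional gradient."""
    v = beta * v + lr * fractional_gradient(grad, w, w0, alpha)
    return w - v, v

# Toy quadratic loss E(w) = 0.5 * (w - 2)^2 with gradient w - 2.
w, v, w0 = 0.0, 0.0, -1.0
for _ in range(300):
    w, v = fogdm_step(w, v, w - 2.0, w0)
```

With 0 < `alpha` < 1 the fractional scaling reshapes the effective step size, while the momentum term `v` smooths the trajectory; on this toy loss the iterate settles at the minimizer w = 2.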

##### 3.3. Convergence Analysis of FOGDM-RBF

The relevant lemma is introduced first, and then, the convergence of the FOGDM-RBF algorithm is proved.

The following assumptions are given:

This condition is easily satisfied, since the most commonly used Gaussian function is uniformly bounded and differentiable.

Lemma 2 (see [22]). *Every bounded monotonic sequence of real numbers converges.*

Theorem 1. *Assume (A1), (A2), and (A3) are valid, and**Then, the following results hold:*

*Proof.* By using the Taylor mean value theorem with the Lagrange remainder, the following can be obtained. From equation (22), it can be obtained that, when the stated condition holds, the following equation holds. Substituting equation (30) into the first term of equation (29) yields the following. From equation (18), the following can be obtained. Substituting equation (26) into equation (32) yields the following. Substituting equation (33) into equation (31) yields the following. From equation (24), the following can be obtained. Substituting equation (35) into equation (34) yields the following. Without loss of generality, the corresponding condition can be assumed to hold.

From equation (22), the following can be obtained. Substituting equation (37) into equation (34) yields the following. From equations (36) and (38), the following is found. From equation (18), the following can be obtained. Substituting equation (20) into equation (40) yields the following. Substituting equation (41) into the second term of equation (29) yields the following. From equation (23), the following can be obtained. Substituting equation (43) into equation (42) yields the following. Substituting equation (23) into equation (44) yields the following. Substituting equation (45) into equation (29) yields the following. Substituting equation (39) into equation (46) yields the following. When equation (25) holds, the following can be obtained. This completes the proof of statement (26) of Theorem 1.

From equation (48), the sequence is monotonically decreasing and bounded below. Hence, from Lemma 2, it is convergent. Therefore, there exists a limit such that the following holds. This completes the proof of statement (27) of Theorem 1.

##### 3.4. Denote

Substituting equation (50) into equation (47) yields the following:

Thus, the following is found:

Since , the following can be obtained:

When , the following is true:

Thus, the following is obtained:

This completes the proof of statement (28) of Theorem 1.

##### 3.5. Navigation Mark Prediction Based on FOGDM-RBF

Prediction of the navigation mark's position is conducted in the following manner. First, the weights of the neural network are initialized. The meteorological, hydrological, and position data of the navigation mark are used as the input of the network, and future position data are used as the output. The difference between the mark's future position and the position estimated by the network is taken as the error function and used to train the network. The weights of the trained network are then used to estimate future positions, and the difference between the estimated and actual positions is used to verify the network. The proposed algorithm is shown in Table 1.
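The data flow described above can be sketched as follows; the feature names and shapes are illustrative assumptions, not the paper's exact inputs:

```python
def build_sample(wind_speed, wind_dir, tide_level, lat0, lon0, lat1, lon1):
    """Pack one training sample: weather/hydrology + initial position -> later position."""
    x = [wind_speed, wind_dir, tide_level, lat0, lon0]   # network input
    d = [lat1, lon1]                                     # desired output (later position)
    return x, d

def position_error(predicted, actual):
    """Error function: difference between the later position and the network estimate."""
    return [a - p for p, a in zip(predicted, actual)]

# One hypothetical sample around the mark's designated position.
x, d = build_sample(5.0, 90.0, 1.2, 23.754, 117.507, 23.755, 117.508)
err = position_error([23.754, 117.507], d)   # network estimate vs. actual later position
```

During training, `err` drives the weight updates; during verification, the same difference between estimated and actual positions measures the trained network's accuracy.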

#### 4. Experiment and Analysis

##### 4.1. Introduction of the Experimental Environment

Four navigation marks at the port of Xiamen were used to evaluate the performance of the proposed algorithm. Table 1 lists position samples of navigation mark #1 on January 1, 2019. Its designated position was 117.50708330 E and 23.75405560 N. The experiment used the marks' telemetry and telecontrol system, with a GPS intelligent-integrated solar beacon lamp. Table 1 also lists the position samples of navigation mark #4 on January 1, 2019.

Table 2 lists the wind speed, maximum wind speed, wind direction, and wind scale collected by the big buoy #4.

##### 4.2. Probability Density of Drifting

The probability density of the navigation mark’s drifting is shown in Figure 1.

**(a)**

**(b)**

For most of the light buoys, the high-density (hotspot) areas of their offsets lie near the geometric centers of their active water areas, which shows that, during migration, these light buoys exhibit a certain regularity and diverge from the center.

For other light buoys, the offset hotspot area deviates from the geometric center of the active water area, which shows that these buoys migrate asymmetrically and that their active hotspot tends toward a certain area under the action of wind and current.

##### 4.3. Analysis of the Influence of Sea Conditions

The estimated trajectory of the navigation mark in the Xiamen Port in 2019 shows that the normal wind comes from the ENE, while strong winds come from the SE and SW. The east-northeast monsoon with high wind speeds prevails between September and March, and low-speed southeast winds prevail between April and August. Therefore, in winter and spring, navigation marks generally tend to shift southwest, while in summer and fall, they tend to shift northeast.

The tide in Xiamen Bay is mainly caused by tidal waves introduced from the open sea and belongs to the regular semidiurnal tide regime. The average durations of the flood and ebb tides are the same. The direction of the tidal current in the waters near the main channel is consistent with that in the channel. Therefore, in each ebb and flood period, the navigation mark is affected by the current, and its offset reciprocates along the channel direction.

Affected by the wind and the current, the navigation mark's deviation direction in Xiamen Port's main channel changes from season to season, and its deviation law mainly depends on the relative strength of the wind and current forces. If the wind force is dominant, the navigation mark deviates mainly downwind. If the flow force is dominant, the navigation mark deviates mainly along the channel axis.

##### 4.4. Analysis of the Influence of Months of the Year

Figure 2 presents the month-to-month change in the drifting variance of navigation mark #4. Its unit is 10^{−8} degrees. The blue cuboids represent the latitude variance from the designated position. The yellow cuboids represent the longitude variance from the designated position.

It can be seen that the drifting variance is smallest in the summer, especially in July. This is because the flow of ship traffic in the summer is dense. The wave wake pushes the navigation mark southwest. When the southeast wind prevails in Xiamen between April and August, it pushes the navigation mark southeast. The two forces counteract each other to a certain extent.

##### 4.5. Comparison with Different Fractional Orders

The test accuracy with different fractional orders and batch sizes is shown in Figure 3.

The training accuracy with different fractional orders and batch sizes is shown in Figure 4.

Figure 5 compares the loss curves of the proposed algorithm and its integer-order counterpart.

Figure 5 shows that the proposed algorithm converges faster, demonstrating the advantage of fractional calculus over integer-order calculus in the field of neural networks.

#### 5. Conclusions

This paper studied a fractional-order gradient descent with momentum for the RBF neural network to estimate the drifting trajectory of a navigation mark. In the study, fractional-order calculus was applied to a gradient descent with the momentum algorithm to train the neural network. The convergence of the proposed algorithm was then proved, and the new algorithm was used to predict the drift of the navigation mark.

The limitation of the current algorithm is that it cannot handle severe and irregular drifting, such as that produced by a typhoon. Further research is needed to improve the algorithm through artificial intelligence forecasting based on historical data.

#### Data Availability

The data used to support the findings of the study are included within this paper.

#### Conflicts of Interest

The author declares no conflicts of interest.