Abstract
In this study, a hybrid metaheuristic algorithm, the chaotic gradient-based optimizer (CGBO), is proposed. The gradient-based optimizer (GBO) is a novel metaheuristic inspired by Newton's method, with two search strategies that ensure strong performance: the gradient search rule (GSR) and the local escaping operation (LEO). GSR utilizes the gradient method to enhance exploitation ability and convergence rate, while LEO employs random operators to escape local optima. It has been verified that gradient-based metaheuristic algorithms have obvious shortcomings in exploration. Meanwhile, chaotic local search (CLS) is an efficient search strategy with randomicity and ergodicity, which is often used to improve global optimization algorithms. Accordingly, we incorporate CLS into GBO to strengthen the exploration ability of the original GBO and to maintain high-level population diversity. In this study, CGBO is tested on thirty CEC2017 benchmark functions and a parameter optimization problem of the dendritic neuron model (DNM). Experimental results indicate that CGBO performs better than other state-of-the-art algorithms in terms of effectiveness and robustness.
1. Introduction
Metaheuristic algorithms (MHAs) have developed rapidly in the field of computational intelligence [1, 2]. MHAs perform effectively and efficiently on many functional optimization problems and real-world problems [3, 4]. Most MHAs draw their inspiration from natural phenomena or mathematical formulas. Population-based MHAs can be applied to a wide variety of optimization problems, in contrast to the limitations of deterministic algorithms. Traditional deterministic algorithms are usually designed for specific problems and sometimes rely on local characteristics of the objective function. By contrast, MHAs are not so limited and can be applied to almost any optimization problem; compared with deterministic algorithms, they are also more understandable and extensible. The vast majority of experimental data show that MHAs have outstanding performance on functional optimization and practical problems [5, 6].
Population-based MHAs can be roughly divided into three categories. Evolutionary algorithms are inspired by the biological evolution of nature; for example, differential evolution (DE) employs crossover and mutation operators to generate better offspring through differences between parents [7], and the genetic algorithm (GA) and evolution strategy (ES) are other common evolutionary algorithms [8–10]. Swarm-based algorithms are another significant part of MHAs, including ant colony optimization (ACO) [11], the grey wolf optimizer (GWO) [12], particle swarm optimization (PSO) [13], artificial bee colony (ABC) [14], the bat algorithm (BA) [15], gravitational search algorithms [16–18], and moth flame optimization (MFO) [19]. In addition to these mainstream algorithms, a large number of algorithms based on natural phenomena have emerged in recent years: vortex search (VS) was inspired by the vortex pattern created by the vortical flow of stirred fluids [20], the mine bomb explosion concept is the inspiration for the mine blast algorithm (MBA) [21], and the gravitational search algorithm (GSA) is based on the law of gravity [22]. Although novel algorithms spring up rapidly and their inspirations are diverse, no single MHA can perform well on all optimization problems, according to the no free lunch theorem. Researchers commonly divide the search characteristics of MHAs into exploration and exploitation [23, 24]. Exploration indicates the ability to find potential points across the global search space, and exploitation represents how an algorithm finds a better neighbouring point of an optimum. On this basis, balancing exploration and exploitation has become an important way to improve the performance of MHAs [25].
Most MHAs set up a moderate tradeoff between exploitation and exploration [2], but to improve performance further, many studies create hybrid algorithms that incorporate a local search operator into the original global search [26]. Over the past few years, many such incorporations of local search operators or adaptive strategies into global search have been made with excellent results; in addition to regular single-objective function optimization [27, 28], some of them perform remarkably even on multiple-objective function optimization problems [29, 30].
The gradient-based optimizer (GBO) is a novel population-based algorithm built on Newton's method [31] to solve real-world optimization problems in the field of engineering. GBO employs two different strategies to find the optimal value in the whole search space and performs well on function optimization, but its exploration ability is limited by multiple factors, including fixed parameters and similar operators. Therefore, we attempt to incorporate a random strategy into GBO to improve its performance. It is easy to see that chaotic local search (CLS) has a remarkable capacity to promote the performance of many metaheuristic algorithms because of its ergodicity and nonrepetition, e.g., chaotic differential evolution (CJADE) [32], chaotic cuckoo search (CCS) [33], and the chaotic whale optimization algorithm (CWOA) [34]. Hence, we hybridize GBO with CLS through a deterministic mechanism, propose a new algorithm termed CGBO, and verify the feasibility and efficiency of CGBO on thirty CEC2017 benchmark functions and a parameter optimization of the dendritic neuron model.
The rest of the paper is organized as follows: a detailed introduction of GBO is given in Section 2. The chaotic search mechanism and various chaotic maps are depicted in Section 3. The hybrid algorithm CGBO is described in Section 4. Section 5 introduces the experiments and presents the experimental results. Finally, the conclusion is given in Section 6.
2. Introduction of the Gradient-Based Optimizer
In recent years, MHAs have developed rapidly as optimization tools successfully applied to a variety of complicated, real-world optimization problems [35, 36]. GBO was proposed based on Newton's method and performed well on several engineering problems, including the speed reducer problem, the three-bar truss problem, the I-beam design problem, the cantilever beam problem, the rolling element bearing design problem, and the tension/compression spring design problem. Following the theory of Newton's method, GBO derives two primary operators, the gradient search rule (GSR) and the local escaping operator (LEO), and uses these two strategies to search for the global optimal point in the entire search space based on the population.
2.1. Initialization
An optimization problem can be represented by three components: decision variables, objective function values, and constraint conditions. GBO is a population-based metaheuristic algorithm, and each member of the population is a decision vector, also called an individual. The whole population can be depicted as X = {x_1, x_2, …, x_N}, where N is the scale of the population and each individual x_i has D decision variables, where D is also called the dimension. Each dimension of an individual is initialized with a regular random operator, x_i^d = X_min + rand(0, 1) × (X_max − X_min), where X_max and X_min express the upper and lower boundaries that stem from the constraint conditions defined by a specific problem. It is worth mentioning that the boundary of each dimension of engineering problems or other real-world problems may be disparate, whereas the constraint of the CEC benchmark functions is uniform across dimensions.
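The initialization step can be sketched as follows; the function name and the use of a single uniform bound for all dimensions (as in the CEC benchmarks) are illustrative assumptions:

```python
import random

def initialize_population(pop_size, dim, lower, upper, seed=None):
    """Uniform random initialization of each dimension within [lower, upper]."""
    rng = random.Random(seed)
    return [[lower + rng.random() * (upper - lower) for _ in range(dim)]
            for _ in range(pop_size)]
```

For problems with per-dimension bounds, `lower` and `upper` would simply become vectors indexed by the dimension.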
Each individual x_i corresponds to an objective function value f(x_i), and our goal is to drive the individuals of the whole population toward the global optimum through the search operators.
2.2. Newton–Raphson Method
Most equations have no closed-form root formula, so finding exact roots is very difficult or even impossible; it is therefore particularly important to find approximate roots. The Newton–Raphson method (Newton's method), proposed by Newton in the 17th century for the approximate solution of equations over the real and complex fields, employs the first few terms of the Taylor series of a function f(x) to find roots of the equation f(x) = 0. The Taylor series of f(x) around a point x_n can be represented as f(x) = f(x_n) + f′(x_n)(x − x_n) + f″(x_n)(x − x_n)²/2! + f‴(x_n)(x − x_n)³/3! + ⋯, where f′, f″, and f‴ denote the first-, second-, and third-order derivatives, respectively. Since (x − x_n) is close to 0, its higher powers approach 0 and become negligible. Ignoring the remainder of the Taylor formula and setting f(x) to zero yields x = x_n − f(x_n)/f′(x_n), where x is the first approximation to the root of the equation. Repeating the iteration, the approximation can be depicted as x_{n+1} = x_n − f(x_n)/f′(x_n).
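As a concrete illustration, the Newton iteration above can be sketched in a few lines; the target function, its derivative, and the starting point are arbitrary examples:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until |f(x)| falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Example: approximate the square root of 2 as the root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```

The quadratic convergence of the method means only a handful of iterations are needed from a reasonable starting point.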
In other words, Newton's method continually approximates the roots of equations through an iterative process based on the Taylor series. In addition, a new variant of Newton's method has been proposed, which refines the classical update by evaluating the derivative at an intermediate point between successive iterates.
Since the iterative process and optimization mechanism of Newton's iterative method are very consistent with the solution of optimization problems, it is theoretically feasible to apply Newton's method to an optimization algorithm.
2.3. Gradient Search Rule
The gradient search rule (GSR) originates from Newton's method; it steers all individuals in the population and draws the whole population toward the global optimal region. The derivation of the GSR formula and the overall mechanism are described in detail below. First, the function f(x) can be represented by Taylor series around the neighbouring points x + Δx and x − Δx, where Δx is the increment; combining the two expansions (equation (9)) and ignoring the Taylor terms of third order and above, the first derivative can be approximated by the central difference f′(x) ≈ (f(x + Δx) − f(x − Δx))/(2Δx).
From equations (6) and (10), the next point can be denoted as x_{n+1} = x_n − (2Δx × f(x_n))/(f(x_n + Δx) − f(x_n − Δx)).
GBO selects the points x_worst and x_best instead of the two neighbouring points in equation (11); for a general minimization problem, the fitness of x_worst is greater than that of x_best. In addition, the positions themselves substitute for the function values, because it takes time to compute the functions. The difference term is therefore built from a random number rand, uniformly distributed in [0, 1], and a small real number ε, where x_worst and x_best, respectively, represent the current worst and best points in the population. GSR can lead a candidate point to a better position, but to achieve a more favourable balance between exploration and exploitation, GSR is modified by an adaptive parameter to enhance the exploration ability of GBO; the improved formulation is GSR = randn × ρ1 × (2Δx × x_n)/(x_worst − x_best + ε), where randn is a normally distributed random number.
Here, ρ1 = 2 × rand × α − α, with α = |β × sin(3π/2 + sin(3πβ/2))| and β = β_min + (β_max − β_min) × (1 − (m/M)³)², where β_max and β_min are set to 1.2 and 0.2, and m and M, respectively, denote the current iteration number and the maximum iteration number. The parameter α generally decreases as the number of iterations increases, but it is not a regular monotonic function. Δx represents the difference between the best individual and a random individual, defined by Δx = rand(1:N) × |step|, step = ((x_best − x_r1^m) + δ)/2, and δ = 2 × rand × |(x_r1^m + x_r2^m + x_r3^m + x_r4^m)/4 − x_n^m|, where rand(1:N) is an N-dimensional random vector with components in [0, 1] and r1, r2, r3, and r4 are indices of different randomly selected individuals. According to the variant of Newton's method proposed by Ozban, the GSR equation is further modified by replacing the points x_n ± Δx with auxiliary positions constructed around z_{n+1} = x_n − GSR, where GSR is calculated by the original GSR formulation in equation (13). In addition to GSR, the direction of movement (DM) is created as an added increment to improve the performance of GSR and promote the speed of convergence: DM = rand × ρ2 × (x_best − x_n), where ρ2 is a random parameter with the same formulation as ρ1. Combining GSR and DM, the updating operation can be represented as x_n^{m+1} = r_a × (r_b × X1_n^m + (1 − r_b) × X2_n^m) + (1 − r_a) × X3_n^m, where r_a and r_b are two random real numbers in the range [0, 1] and the solutions X1_n^m, X2_n^m, and X3_n^m are given as X1_n^m = x_n^m − GSR + DM, X2_n^m = x_best − GSR + DM, and X3_n^m = x_n^m − ρ1 × (X2_n^m − X1_n^m).
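As a rough illustration of how a GSR-like term and a DM-like term combine in one position update, the scalar sketch below uses simplified coefficients and step sizes; it is not the paper's exact formulation:

```python
import random

def gsr_dm_step(x, best, worst, rho1, rho2, eps=1e-12, rng=random):
    """One simplified scalar update: a gradient-search-rule-like term scaled
    by the worst-best gap, plus a direction-of-movement term pulling the
    candidate toward the best solution."""
    delta = rng.random() * abs(best - x)                  # illustrative step size
    gsr = rng.gauss(0.0, 1.0) * rho1 * (2.0 * delta * x) / (worst - best + eps)
    dm = rng.random() * rho2 * (best - x)
    return x - gsr + dm
```

In the full algorithm this update is applied per dimension to every individual, with the adaptive schedules for ρ1 and ρ2 described above.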
The whole searching process of GSR is shown in Figure 1. Due to the limitation of a single search operation, a local escaping operator is proposed to improve the performance of the algorithm.
2.4. Local Escaping Operator (LEO)
LEO is an optional operator that alters candidate solutions based on several already-obtained solutions. LEO generates two different solutions, and the final solution is randomly selected from these two. In the LEO process, one coefficient is a random number between 0 and 1, and another is a random number that obeys the standard Gaussian distribution; the two step lengths are built from random numbers u1, u2, and u3, and from x_k, which is either a randomly generated individual within the constraint range or an individual randomly selected from the population.
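The escaping idea behind LEO, blending the current solution with the best solution and a randomly generated one, can be sketched as follows; the blend coefficients and the use of a fresh random solution for x_k are illustrative simplifications, not the paper's exact operator:

```python
import random

def leo_step(x, best, lower, upper, rng=random):
    """Perturb a solution by blending it with the best solution and with a
    freshly generated random solution (standing in for x_k)."""
    u1, u2 = rng.random(), rng.random()
    x_rand = [lower + rng.random() * (upper - lower) for _ in x]  # random solution
    return [xi + u1 * (bi - xi) + u2 * rng.gauss(0.0, 1.0) * (ri - xi)
            for xi, bi, ri in zip(x, best, x_rand)]
```

The Gaussian factor lets the perturbation occasionally take large steps, which is what allows the operator to escape a local basin.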
3. Chaotic Search Mechanism
Chaos refers to motion that is unpredictable because of its sensitivity to initial values in a deterministic dynamical system. Chaos is a seemingly irregular and complex motion pattern in the real world [37]. Its characteristic is that an originally orderly motion following simple physical laws deviates from the expected regularity and turns into a disordered form under certain conditions. Chaos theory has been successfully applied to many fields, including evolutionary algorithms [38, 39]. Since most population-based optimization algorithms contain random operators, chaos can be utilized to replace the general random operator and thereby improve the global search algorithm, owing to its ergodicity and randomicity.
3.1. Chaotic Map
Chaotic maps are mainly adopted to generate chaotic sequences that adjust parameters during population initialization and the iterative process. They have proven effective in many bionic optimization algorithms, such as the genetic algorithm [40], the artificial immune system algorithm [41], and the differential evolution algorithm, where chaotic maps are mostly employed to adjust the crossover, mutation, and selection operators. Here, we choose twelve common chaotic maps to establish the chaotic search mechanism:
(1) Logistic map: also called unimodal mapping, it is a quadratic polynomial map that is invariably used as a typical example of how many complex chaotic phenomena stem from a simple nonlinear dynamics equation: z_{k+1} = μ z_k (1 − z_k), where z_k is the chaotic value in iteration k and the map function is the same in every iteration. For μ = 4, the chaotic sequence lies in (0, 1). It is worth noting that, to keep the chaotic system from degenerating, the initial value z_0 is set to 0.7.
(2) Piecewise linear chaotic map (PWLCM): PWLCM has attracted much attention because of its simple representation and excellent dynamic characteristics. It is ergodic and has a uniformly invariant density function over the defined interval: z_{k+1} = z_k/p for 0 ≤ z_k < p, z_{k+1} = (z_k − p)/(0.5 − p) for p ≤ z_k < 0.5, and symmetrically z_{k+1} = F(1 − z_k) for 0.5 ≤ z_k < 1, where p ∈ (0, 0.5) is the control parameter.
(3) Chebyshev map: the Chebyshev map originated from the Chebyshev polynomial, which is inspired by the expansion of the cosine and sine of multiple angles [42]: z_{k+1} = cos(a × arccos(z_k)), where a is the order parameter, z_k ∈ [−1, 1], and the initial value is set to 0.7; if a negative output appears, we take its absolute value.
(4) Circle map: the circle map can be thought of as a one-dimensional map that simplifies (due to dissipation) the more general map from a plane to itself [43]: z_{k+1} = (z_k + b − (a/2π) sin(2π z_k)) mod 1, where a and b are control parameters. The chaotic sequence lies in (0, 1), and the initial value is also set to 0.7.
(5) Cubic map: one of the simplest polynomial maps: z_{k+1} = ρ z_k (1 − z_k²), where ρ is the control parameter.
(6) Sine map: the input of a sine function lies in [0, π], and its output is between 0 and 1; the sine map is obtained by rescaling the input range to [0, 1]: z_{k+1} = sin(π z_k), for z_k ∈ (0, 1).
(7) Singer map: the singer map is a one-dimensional map function: z_{k+1} = μ(7.86 z_k − 23.31 z_k² + 28.75 z_k³ − 13.302875 z_k⁴), where μ is the control parameter used in our experiments. The value of z_k is between 0 and 1.
(8) Gaussian map [44]: a one-dimensional nonlinear map based on an exponential function: z_{k+1} = exp(−α z_k²) + β, where α and β are control parameters; the chaotic sequence lies in (0, 1).
(9) Sinusoidal map [45]: this map generates chaotic sequences based on the sine function: z_{k+1} = a z_k² sin(π z_k). It is worth noting that this equation simplifies to z_{k+1} = sin(π z_k) when a = 2.3 and z_0 = 0.7, which are the settings used here.
(10) Tent map: the tent map has a chaos mechanism akin to the logistic map: z_{k+1} = z_k/α for z_k < α and z_{k+1} = (1 − z_k)/(1 − α) otherwise, where α is the control parameter.
(11) Iterative chaotic map with infinite collapses (ICMIC) [46]: this map generates chaotic signals with infinite collapses in a determinate region: z_{k+1} = sin(a/z_k), where a is set to 15. There are negative numbers in the chaotic sequence generated by ICMIC, so we apply an absolute-value operator when generating chaos.
(12) Bernoulli map: the Bernoulli map is a weak-mixing piecewise function with ergodic properties: z_{k+1} = z_k/(1 − λ) for 0 < z_k ≤ 1 − λ and z_{k+1} = (z_k − 1 + λ)/λ for 1 − λ < z_k < 1, where λ is the control parameter and z_0 is the initial value.
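A few of the maps above can be sketched directly; the parameter values shown (μ = 4 for the logistic map, α = 0.7 for the tent map) are common textbook choices used here for illustration:

```python
import math

def logistic_map(z, mu=4.0):
    # z_{k+1} = mu * z_k * (1 - z_k); fully chaotic for mu = 4
    return mu * z * (1.0 - z)

def sine_map(z):
    # z_{k+1} = sin(pi * z_k), which maps (0, 1) into (0, 1]
    return math.sin(math.pi * z)

def tent_map(z, alpha=0.7):
    # piecewise-linear tent map with its peak at z = alpha
    return z / alpha if z < alpha else (1.0 - z) / (1.0 - alpha)

def chaotic_sequence(map_fn, z0=0.7, n=100):
    """Iterate a map from z0 and collect n successive chaotic values."""
    seq, z = [], z0
    for _ in range(n):
        z = map_fn(z)
        seq.append(z)
    return seq
```

The remaining maps follow the same pattern: a one-dimensional update function iterated from a fixed initial value.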
3.2. Chaotic Local Search
According to these twelve chaotic mapping formulas, we can obtain a series of chaotic sequences to replace pseudorandom numbers within the range (0, 1). Figure 2 exhibits the distribution of the numbers generated by the twelve chaotic maps, and each chaotic map has its own characteristics; for example, the singer map and the circle map concentrate their outputs in visibly different subintervals. Because of these distinct numerical peculiarities of chaotic maps, we utilize a chaotic local search strategy to improve the original GBO algorithm.
Recently, local search strategies have been successfully employed to improve global single-objective and multiobjective optimization algorithms [47, 48]; a local search strategy is usually used to jump out of a local optimum when the global algorithm is trapped in stagnation, as in the well-known tabu search [49, 50] and simulated annealing [51, 52]. The CLS strategy recently emerged as an efficient local search method and has been utilized to improve many state-of-the-art global optimization algorithms. CLS takes advantage of a chaotic sequence within the range (0, 1) to adjust current individuals. The adjusting method can be formulated as x′ = x + ρ × (UB − LB) × (k × z − 1), where x is the selected individual to be optimized, x′ is the individual optimized by CLS, ρ is the scale parameter that adjusts the influence of the chaotic sequence, UB and LB denote the upper limit and lower limit defined by the specific optimization problem, respectively, z is the number obtained from the chaotic map, and k is the parameter that changes the range of the chaotic perturbation (k = 2 rescales z from (0, 1) to (−1, 1)). The whole procedure of CLS is shown in Figure 3.
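A minimal sketch of the CLS adjustment, assuming a single uniform bound per dimension and a rescaling of the chaotic value z from (0, 1) to (−1, 1); the function name and the fixed scale parameter are illustrative:

```python
def chaotic_local_search(x, z, lower, upper, rho=0.1):
    """Shift every dimension of x by a chaotic perturbation: z in (0, 1) is
    rescaled to (-1, 1), scaled by rho and the search-range width, and the
    result is clamped back into [lower, upper]."""
    new = []
    for xi in x:
        v = xi + rho * (upper - lower) * (2.0 * z - 1.0)
        new.append(min(max(v, lower), upper))
    return new
```

Because z is drawn from an ergodic chaotic sequence rather than a pseudorandom generator, repeated applications sweep the neighbourhood of x more systematically.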
4. Mechanism of the Proposed Algorithm CGBO
4.1. Motivation
From the brief introductions of GBO and CLS, we can see that GBO generates new individuals based on Newton's method and the current population; in other words, it exploits information about the existing population to create new individuals. Although the LEO mechanism is integrated into the principal part of GBO to avoid stagnation, GBO can still be trapped in local optima under some circumstances, owing to its incomplete judgement criteria and operators. Meanwhile, CLS can effectively help escape local optima because of its ergodicity and randomicity, and it adjusts individuals purely through the sequence produced by a chaotic map. Considering the exploration and exploitation of a metaheuristic algorithm, GBO offers strong exploitation and convergence, while CLS supplies the randomness needed to maintain diversity, which prompted us to hybridize GBO with CLS to improve the performance of the original algorithm.
4.2. Mechanism of CGBO and Some FineTuning
Based on the above motivation, we incorporate CLS into GBO through a random selection mechanism, modify some details, and propose the algorithm CGBO. The pseudocode of CGBO is shown in Algorithm 1, where N is the number of individuals in the population, f expresses the objective function, and r1, r2, and r3 are positive integer indices between 1 and N; different from the original algorithm, we use Gaussian random numbers to generate r1, r2, and r3. P_cls is the probability of using the chaotic search. In order to ensure the randomness of the chaotic search strategy, the twelve different mapping functions are selected with equal probability in this experiment. So that the chaotic local search strategy integrates better into the original algorithm, we also fine-tune equation (15) of GBO by resetting its parameter.
To make CGBO easy to understand, the flowchart is also shown in Figure 4. Because the chaotic local search greatly increases the diversity of the population, the proposed algorithm generates varied individuals from a more complex population and has a higher probability of finding the global optimum in a wider search space, so that the proposed CGBO theoretically achieves a better balance between exploitation and exploration.
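The overall loop can be sketched at a high level as follows. The GBO update is simplified here to a move toward the current best, and only the logistic map is used for the chaotic step, so this is an illustrative skeleton rather than the exact CGBO procedure:

```python
import random

def cgbo_sketch(objective, dim, lower, upper, pop_size=30, max_iter=100,
                p_cls=0.5, seed=0):
    """Skeleton of the hybrid loop: a GBO-style move (simplified to a pull
    toward the best solution) followed, with probability p_cls, by a chaotic
    local search step, then greedy selection."""
    rng = random.Random(seed)
    pop = [[lower + rng.random() * (upper - lower) for _ in range(dim)]
           for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    z = 0.7                                   # chaotic state (logistic map)
    for _ in range(max_iter):
        best = list(pop[min(range(pop_size), key=fitness.__getitem__)])
        for i in range(pop_size):
            x = pop[i]
            # simplified GBO-style move toward the current best
            cand = [xi + rng.random() * (bi - xi) for xi, bi in zip(x, best)]
            if rng.random() < p_cls:          # chaotic local search step
                z = 4.0 * z * (1.0 - z)
                cand = [min(max(ci + 0.1 * (upper - lower) * (2.0 * z - 1.0),
                               lower), upper) for ci in cand]
            f = objective(cand)
            if f < fitness[i]:                # greedy selection
                pop[i], fitness[i] = cand, f
    return min(fitness)
```

In the full algorithm, the simplified move is replaced by the GSR/DM/LEO updates of Section 2, and the chaotic map is drawn uniformly from the twelve maps of Section 3.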
5. Experimental Results and Statistical Analysis
To verify the effectiveness of the proposed mechanism, we select two different experiments: conventional benchmark function optimization and a parameter optimization of the dendritic neuron model. All experiments were run on a computer with an Intel(R) Core(TM) i7-9700 CPU at 3.00 GHz and 8 GB of RAM.
5.1. CEC2017 Benchmark Function
The IEEE CEC2017 benchmark contains three unimodal functions, seven simple multimodal functions, ten hybrid functions, and ten composition functions. Except for the unimodal functions, the others have quite a few local optimal points, which demand optimization algorithms with robustness and effectiveness. Therefore, we compare the experimental results of CGBO on CEC2017 with the original algorithm GBO and some state-of-the-art algorithms: genetic learning particle swarm optimization (GLPSO) [53], brain storm optimization based on orthogonal learning design (OLBSO) [54], the dynamic neighborhood learning-based gravitational search algorithm (DNLGSA) [55], and the multiple chaotic maps-incorporated grey wolf optimization algorithm (CGWO) [56]. The parameter settings of the compared algorithms are listed in Table 1.
In our experiment, the thirty benchmark functions of CEC2017 are used to test the performance of the algorithms. The population size of each algorithm is set to 100, the number of dimensions is 30, the maximum number of iterations is 3000, and each algorithm is run 51 times owing to the randomness of metaheuristic algorithms, which is sufficient to evaluate overall performance. Table 2 lists the convergence results for twenty-nine CEC2017 functions; it is worth mentioning that F2 is usually discarded because of its instability. The mean and standard deviation columns, respectively, represent the average values and standard deviations of the final optimal values over the fifty-one runs. The highlighted numerical values have a significant advantage in comparison with all other competitors. In addition, the last line summarizes, at the five percent significance level, on how many functions the proposed CGBO performs better than each of the other algorithms. The experimental results show that CGBO is more competitive than the other advanced algorithms for the optimization of hybrid functions and composition functions.
5.2. Convergence Graphs and Box-and-Whisker Graphs
In addition to the tabular data, we employ a variety of illustrations to demonstrate the advantages of CGBO. Convergence curves and boxplot graphs for several representative functions are summarized in Figures 5 and 6.
The horizontal axis and vertical axis of the convergence figures, respectively, denote the number of iterations and the average value of the optimization. From Figures 5 and 6, we can clearly see that the overall slope of CGBO is significantly larger than that of the other algorithms. Meanwhile, the final convergence results of CGBO are lower than those of the competitor algorithms. These characteristics reveal that CGBO can not only converge quickly but also effectively avoid being trapped in local optima and seek out the global optimal solution when solving some hybrid functions.
Figure 6 exhibits three boxplot graphs, where a red plus sign represents an outlier, defined as a measured value that deviates from the mean by more than two times the standard deviation. The top (bottom) line is the maximum (minimum) value excluding outliers, the upper and lower border lines, respectively, denote the upper and lower quartiles, and the red line in the middle is the median. These boxplots demonstrate that most assessment criteria of CGBO are better than those of the other algorithms, which indicates that CGBO is stable and robust for part of the function optimization problems.
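The outlier rule used for the boxplots (deviation from the mean by more than two standard deviations) can be sketched as:

```python
import statistics

def find_outliers(values, k=2.0):
    """Return the values deviating from the mean by more than k standard
    deviations (population standard deviation)."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [v for v in values if abs(v - mu) > k * sd]
```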
5.3. Population Diversity
In order to prove that the proposed algorithm has better global exploration capability, we calculate the population diversity of the two algorithms to compare the improved algorithm CGBO with GBO. Population diversity is an important evaluation criterion for metaheuristic algorithms: an excellent improvement mechanism will keep a high-level population diversity throughout the whole iterative process. Population diversity can be obtained by averaging the deviation of each individual from the mean of the entire population. Therefore, the population diversity at the m-th iteration can be formulated as Div(m) = (1/N) × Σ_{i=1}^{N} ||x_i^m − x̄^m||, where ||·|| returns the Euclidean norm, x̄^m represents the mean value of each dimension over the whole population, and N is the number of individuals. Several functions are selected to draw the variation curves of population diversity during the whole convergence process, and the diagram is demonstrated in Figure 7.
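The diversity measure can be computed directly; the sketch below averages the Euclidean distance of each individual from the population centroid:

```python
import math

def population_diversity(pop):
    """Average Euclidean distance of each individual from the population
    centroid (the per-dimension mean of the population)."""
    n, dim = len(pop), len(pop[0])
    centroid = [sum(x[d] for x in pop) / n for d in range(dim)]
    return sum(math.sqrt(sum((x[d] - centroid[d]) ** 2 for d in range(dim)))
               for x in pop) / n
```

A value near zero means the population has collapsed onto a single point, while large values indicate the individuals are still spread over the search space.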
Population-based stochastic algorithms need to maintain high population diversity during the early iterative process to avoid falling into local optima. However, if population diversity stays high all the time, the convergence speed is bound to suffer. From the three diagrams, we can see that CGBO maintains high population diversity before 1500 iterations, which then steadily declines during the later optimization process in order to speed up convergence. This further demonstrates the efficiency of our proposed mechanism in keeping the balance between exploration and exploitation.
5.4. Contour Map
In order to reflect more intuitively how individuals in the population change, we utilize a two-dimensional contour map to exhibit the entire optimization process, as shown in Figure 8. We plot the individual distribution at three stages of the same run (iteration = 1, 50, 3000), where each red dot represents the value of the first two dimensions of an individual. At the end of the optimization, all the individuals cluster around a local optimum (the global optimum), which is represented by a red five-pointed star. From the contour map of F28, we can see that the points initially assemble in one region of the search space, but in the end the population converges to the global optimum in a different region. This indicates that CGBO has the ability to jump out of a local optimum when it becomes trapped; in other words, CGBO has excellent exploration ability.
5.5. Parameter Optimization of DNM
The earliest single-layer perceptron considered only the function and structure of a single neuron [57], and it could solve only linear problems, whereas real synapses are nonlinear. Building on these observations, the single-neuron model DNM was proposed and successfully applied to many real-world problems, for instance, liver disorder analysis, breast cancer classification, and financial time series prediction [58, 59].
DNM has four layers: a synaptic layer, a dendrite layer, a membrane layer, and a soma layer. The detailed structure of DNM is shown in Figure 9. The input data are processed by a modified sigmoid function in the synaptic layer, which normalizes the original data. The dendrite layer and membrane layer are, respectively, a multiplication operator and an accumulation operator. It is worth noting that the multiplication operator is equivalent to logic AND and the accumulation operator approximates logic OR when the input data correspond to 0 or 1. Finally, the soma layer applies a sigmoid function to produce the overall output of the neuron.
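The four-layer forward pass can be sketched as follows; the synaptic steepness k and soma threshold θ are illustrative values, and the nested-list layout of weights and thresholds is an assumption:

```python
import math

def dnm_forward(x, w, q, k=5.0, theta=0.5):
    """DNM forward pass: sigmoid synapses -> multiplicative dendrites ->
    additive membrane -> sigmoid soma. w[j][i] and q[j][i] are the weight
    and threshold of the synapse receiving input i on dendrite j."""
    membrane = 0.0
    for wj, qj in zip(w, q):
        dendrite = 1.0
        for xi, wij, qij in zip(x, wj, qj):
            dendrite *= 1.0 / (1.0 + math.exp(-k * (wij * xi - qij)))  # synaptic layer
        membrane += dendrite                                            # membrane sums dendrites
    return 1.0 / (1.0 + math.exp(-k * (membrane - theta)))              # soma output in (0, 1)
```

Training the model then amounts to searching over the flattened w and q entries, which is exactly the parameter optimization problem handed to CGBO.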
DNM is a neural network with a simple structure, which fully embodies the characteristics of a real neuron [60, 61]. A usable neural network structure cannot be achieved without an excellent training algorithm, and a simple DNM has only two kinds of parameters that need to be trained: the weights w and the thresholds q. For different classification or prediction problems, the input data are distinct and the structure of DNM changes accordingly. The number of inputs on the dendrite layer corresponds to the number of features, and the number of dendrites equals the number of individuals in the metaheuristic algorithms.
According to the above mechanism, we can take advantage of our proposed CGBO algorithm to train DNM. Data sets including Iris, Liver, Mackey Glass, Cancer, and Wine are selected to test the performance of the trained DNM. Table 3 lists the best accuracy of DNM optimized by CGBO, GBO, and three other well-known training algorithms [62, 63] over 30 runs.
Here, each data set is split evenly into a 50 percent test set and a 50 percent training set, and each bold numerical value represents the best accuracy among the algorithms. CGBO clearly performs better than the other competitive algorithms. From the tables and graphs above, we can conclude that CGBO has excellent performance for function optimization and parameter training in terms of effectiveness and fault tolerance.
5.6. Time Complexity
Time complexity qualitatively describes the running time of an algorithm. An exponential time complexity explodes as the number of input data increases and requires enormous computing resources, so time complexity is also a significant indicator for evaluating the performance of an algorithm.
Here, we compare the original algorithm GBO with the improved algorithm CGBO. First, consider what the two algorithms have in common. The time complexity of the initialization procedure is O(N × D), and the GSR strategy traverses each dimension of each individual, so its time complexity per iteration is also O(N × D), where D is generally much less than N. In addition, LEO costs O(N × D), and the CLS contains two parts: generating the chaotic sequences needs O(N), and the chaotic search process takes O(N × D). Hence, for both GBO and CGBO, the total time complexity over M iterations can be summed up as O(M × N × D), which is rare and commendable for an algorithm with such computational performance. In real-machine runs, CGBO incurs little additional time consumption beyond function evaluation. It is worth noting that the worst-case time complexities of the competitor algorithms GLPSO, OLBSO, DNLGSA, JADE, and CGWO are of the same order.
6. Conclusions
In this paper, we incorporate an improved CLS into the metaheuristic algorithm GBO and propose an upgraded version, CGBO. Different from the single chaotic map adopted in the past, we randomly select among twelve chaotic mapping functions in order to give full play to the randomness and ergodicity of chaotic sequences. Moreover, this mechanism produces more biased data, which are more regular than pseudorandom numbers yet more random than samples from density functions such as the Gaussian. To comprehensively verify the robustness and effectiveness of CGBO, we choose thirty single-objective CEC2017 benchmark functions and the parameter optimization of DNM to test its performance, and we select several well-performing state-of-the-art algorithms for comparison. Both experimental results demonstrate that the proposed CGBO performs well on optimization problems. In future work, we will attempt to establish a brand-new adaptive chaotic search mechanism and then develop a more competitive algorithm for solving pressing engineering optimization problems.
Data Availability
The benchmark data of IEEE CEC2017 can be found at https://github.com/PNSuganthan/CEC2017BoundContrained/blob/master/Definitions%20of%20%20CEC2017%20benchmark%20suite%20final%20version%20updated.pdf.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors’ Contributions
Hang Yu was involved in conceptualization, methodology, software provision, original draft preparation, and review and editing. Yu Zhang was responsible for visualization, investigation, validation, original draft preparation, and review and editing. Pengxing Cai provided software and performed investigation and validation. Junyan Yi and Shi Wang carried out conceptualization, methodology, and review and editing. Sheng Li conducted methodology and review and editing.
Acknowledgments
This research was partially supported by the Scientific Research Project of Beijing Municipal Education Commission (No. KM202010016011) and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Nos. 19KJB520059, 19KJB520057, and 19KJB520058).