Abstract

As is well known, differential algebraic equations (DAEs), which can describe both dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems, and control theory. In practical physical modeling within these domains, the resulting systems are often high-index DAEs. Classical implicit numerical methods typically suffer varying degrees of order reduction in numerical accuracy when solving high-index systems. Recently, physics-informed neural networks (PINNs) have gained attention for solving DAE systems, but they face challenges such as the inability to directly solve high-index systems, low predictive accuracy, and weak generalization. In this paper, we propose a PINN computational framework that combines the Radau IIA numerical method with an improved fully connected neural network structure to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments on two classical high-index systems, investigating how different orders and time-step sizes of the Radau IIA method affect the accuracy of the neural network solutions. Across the tested time-step sizes, the experimental results indicate that a PINN built on the 5th-order Radau IIA method achieves the highest accuracy and stability: the absolute errors of all differential variables remain uniformly small, and the absolute errors of the algebraic variables are likewise kept low. Our method therefore exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or of challenging high-dimensional partial differential algebraic equation systems.

1. Introduction

The concept of differential algebraic equations (DAEs) was formally proposed by Gear in the study of network analysis and continuous system simulation problems [1]. Petzold made explicit, through her study of numerical methods, that DAEs are not ordinary differential equations (ODEs) [2]. DAE systems are composed of coupled ODE systems and algebraic equation systems with physical significance. These systems encompass both differential and algebraic variables, and their form is more general than that of traditional ODE systems. DAEs have attracted significant attention since their inception, as they can accurately describe systems that ODEs alone cannot represent. They have found extensive applications in various fields, including fluid dynamics, multi-body dynamics, electronic circuits, mechanical systems, control theory, and chemical engineering.

In different developmental periods and research fields, DAEs are also known as singular systems, general systems, descriptor systems, or constrained systems, among other names. They exhibit various structural forms, such as linear DAEs, nonlinear DAEs, semi-explicit DAEs, implicit DAEs, and Hessenberg-type DAEs. Fortunately, in practical physical modeling, most of the system models obtained are either low-index DAEs or high-index (index $\geq 2$) Hessenberg-type DAEs [3]. The index of a DAE measures the “distance” between the DAE and an ODE: generally, a higher index implies greater difficulty in transforming the DAE into an ODE or in directly solving it with ODE numerical methods. Traditional numerical methods for solving DAE systems include implicit Runge–Kutta methods [4], BDF methods [5], pseudospectral methods [6], the Adomian decomposition method [7], exponential integrators [8], generalized-$\alpha$ methods [9], and Lie group methods [10–12]. It is worth noting that these direct numerical methods can solve DAEs with an index of 1. For high-index DAE systems, however, they are only applicable to certain classes of DAEs and may suffer varying degrees of order reduction in numerical accuracy.

With the rapid advancement of neural network technology and hardware resources, neural networks demonstrate increasingly powerful capabilities. Compared to traditional numerical methods, neural networks offer several advantages, including strong generalization, fault tolerance, and the ability to compute in parallel. In 1998, Lagaris et al. [13] approximated solutions to ODE or PDE problems by constructing parameterized trial functions. These trial functions consist of two parts: one part satisfies the initial or boundary conditions and contains no trainable parameters, while the other part is a simple feedforward neural network with trainable parameters. In 2019, Raissi et al. [14] introduced an important technique known as physics-informed neural networks (PINNs) for the numerical approximation of partial differential equation (PDE) problems. The PINN loss function includes not only initial or boundary conditions that reflect physical properties but also a residual term at selected points of the time-space domain where the PDEs must hold. It is worth noting that PINN is a data-driven approach that does not require prior knowledge of the analytical form of the solution; instead, it learns the solution from data. Various variants of PINN have been proposed based on different collocation methods, such as the variational hp-VPINN [15] and the conservative PINN (CPINN) [16]. Recently, Wu et al. [17] introduced an innovative PINN designed to address Hausdorff derivative Poisson equations on irregular domains. This method leverages the Hausdorff fractal derivative to reformulate the numerical resolution of the PDE as an optimization problem encompassing the principal equation and its boundary conditions, and it is characterized by its simplicity, clarity, and programming convenience. PINN has also been widely applied to problems in various fields, including fluid dynamics [18–21], seismic wave prediction [22], and optics [23].

In recent years, influenced by these methods, many researchers have attempted to construct neural network models from different perspectives to solve various types of DAE systems. For Hessenberg DAEs with control variables and an index of 3, Kozlov and Tiumentsev [24] implemented the BDF method using a semi-empirical neural network model. Zhao et al. [25] constructed a single-layer feed-forward neural network (FFNN) to solve Hessenberg-type DAE systems. They augmented the loss function of their special Euler–Lagrange equation system with penalty terms for the algebraic equations to avoid drift in the results. Their experimental results showed that the FFNN method with the Sigmoid activation function provided approximate analytical solutions close to the numerical solutions of the corresponding Runge–Kutta methods, but they did not provide further details about the method’s accuracy. For linear DAE systems, Liu et al. [26] selected Jacobi polynomials as activation functions and constructed a single-hidden-layer feed-forward neural network (JNN), determining the network parameters with the classical extreme learning machine (ELM) algorithm. Through experimental comparisons with other approximation methods, such as the Padé approximation, the Adomian decomposition method, and Adams methods, they illustrated the feasibility and superiority of the JNN method. It is worth noting that the examples in that paper involve DAEs with an index of 1 or linear DAEs that have been reduced to index 1. For DAE systems with an index of 1, Moya and Lin [27] proposed a neural network architecture called DAE-PINN, a discrete-time model based on the implicit Runge–Kutta method that can directly address most index-1 differential-algebraic equation problems. However, it cannot solve high-index DAE problems and suffers from low accuracy. To address the high-accuracy computation challenges of high-index DAE systems, we combine the Radau IIA numerical method with an improved fully connected neural network and propose a PINN computational framework based on the Radau method. Furthermore, we improve the efficiency and accuracy of the solution through a domain decomposition strategy.

In Section 2, we briefly introduce the fundamental concepts of DAE systems, the Radau IIA numerical method, the improved fully connected neural network, and the discrete-time model of PINN. Building upon this foundation, we give a detailed construction of the PINN computational framework based on the Radau IIA method and employ a time-domain decomposition strategy for the neural network. Section 3 uses the neural network designed in this paper to solve two high-index DAE systems and analyzes the solution accuracy. Finally, we discuss and summarize the advantages, challenges, and potential avenues for improvement of the Radau-PINN architecture.

2. Scientific Machine Learning Methods

This section first introduces the basic concepts of DAEs and the classical Radau IIA numerical method. We then present an improved fully connected neural network architecture and the discrete-time model of PINN. Building upon these, we construct a PINN based on the Radau IIA method. Finally, we enhance the efficiency and accuracy of neural network solutions for DAE systems by utilizing the concept of domain decomposition.

2.1. Radau IIA Method for DAE Systems

This article first provides a brief introduction to DAEs with an index of 2, written in the following semi-explicit form:

$$\begin{cases} \dot{x}(t) = f\big(x(t), y(t)\big), & x(t_0) = x_0, \\ 0 = g\big(x(t)\big), \end{cases} \qquad t \in [t_0, T],$$

where $x(t) \in \mathbb{R}^{m}$ is the differential function variable, $y(t) \in \mathbb{R}^{n}$ is the algebraic function variable, $t_0$ is the initial time point, and $x_0$ is the initial value. Both $f$ and $g$ are sufficiently smooth, and the Jacobian matrix $g_x f_y$ is non-singular.

The Radau IIA method is a class of implicit Runge–Kutta methods, typically defined in the following general form:

$$k_i = f\Big(t_n + c_i h,\; x_n + h \sum_{j=1}^{s} a_{ij} k_j\Big), \quad i = 1, \dots, s, \qquad x_{n+1} = x_n + h \sum_{i=1}^{s} b_i k_i,$$

where $s$ is the number of stages, $h$ is the step size, $n$ is the current step number, $a_{ij}$, $b_i$, and $c_i$ are parameters, and $c_i = \sum_{j=1}^{s} a_{ij}$, $0 \le c_i \le 1$.

As summarized in Table 1, different sets of parameters lead to different implicit Runge–Kutta methods, such as the commonly used Gauss, Radau, and Lobatto methods, whose parameters are determined using Gauss polynomials, Radau polynomials, and Lobatto polynomials, respectively. Among them, the Radau IIA method is a high-precision numerical method with excellent numerical stability, and it is therefore the method chosen in this paper. Its parameters satisfy the simplifying conditions

$$B(2s-1):\ \sum_{i=1}^{s} b_i c_i^{k-1} = \frac{1}{k}, \quad k = 1, \dots, 2s-1, \qquad C(s):\ \sum_{j=1}^{s} a_{ij} c_j^{k-1} = \frac{c_i^{k}}{k}, \quad i, k = 1, \dots, s,$$

together with $a_{sj} = b_j$ for $j = 1, \dots, s$ and $c_s = 1$, so the method is stiffly accurate and attains order $2s - 1$.
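To make these conditions concrete, the following short script hard-codes the classical 3-stage (5th-order) Radau IIA tableau and numerically verifies $C(3)$, $B(5)$, and stiff accuracy. The coefficient values are the standard ones from the literature; the check itself is our own illustrative addition.

```python
# Numerical sanity check of the 3-stage (5th-order) Radau IIA tableau.
import numpy as np

s6 = np.sqrt(6.0)
A = np.array([
    [(88 - 7 * s6) / 360,     (296 - 169 * s6) / 1800, (-2 + 3 * s6) / 225],
    [(296 + 169 * s6) / 1800, (88 + 7 * s6) / 360,     (-2 - 3 * s6) / 225],
    [(16 - s6) / 36,          (16 + s6) / 36,          1.0 / 9.0],
])
b = A[-1].copy()            # stiffly accurate: b_j = a_{sj}
c = np.array([(4 - s6) / 10, (4 + s6) / 10, 1.0])
s = 3

# C(s): sum_j a_ij c_j^{k-1} = c_i^k / k for k = 1..s
for k in range(1, s + 1):
    assert np.allclose(A @ c ** (k - 1), c ** k / k)

# B(2s-1): sum_i b_i c_i^{k-1} = 1/k for k = 1..2s-1 (order 5)
for k in range(1, 2 * s):
    assert np.allclose(b @ c ** (k - 1), 1.0 / k)

print("3-stage Radau IIA satisfies C(3) and B(5), with c_s = 1")
```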

2.2. Improved Fully Connected Neural Network Structure

Building upon the DAE-PINN structure, we employ adaptive activation functions of the form

$$\sigma_a(z) = \sigma(n a z), \tag{4}$$

to train an improved fully connected neural network structure. The specifics are as follows:

The architecture of the improved fully connected neural network model is primarily constructed using two transformer networks, denoted as U and R, to build two stacked-layer networks, as illustrated in Figure 1. Both networks map the input variable (the differential function variable $x$) to a high-dimensional feature space. Each hidden layer then forms new residual connections using element-wise multiplication, as expressed below:

$$\begin{aligned} U &= \sigma\big(n a (W^{U} X + b^{U})\big), \qquad R = \sigma\big(n a (W^{R} X + b^{R})\big), \\ H^{(1)} &= \sigma\big(n a (W^{(1)} X + b^{(1)})\big), \\ Z^{(k)} &= \sigma\big(n a (W^{(k+1)} H^{(k)} + b^{(k+1)})\big), \quad k = 1, \dots, L, \\ H^{(k+1)} &= \big(1 - Z^{(k)}\big) \odot U + Z^{(k)} \odot R, \quad k = 1, \dots, L, \\ f_{\theta}(X) &= W H^{(L+1)} + b, \end{aligned}$$

where $X$ represents the input vector of the neural network, $W^{(k)}$ is the collection of weights of the $k$-th layer, $b^{(k)}$ denotes the corresponding set of biases, $\sigma$ is the activation function, $\odot$ represents element-wise multiplication, $L$ indicates the number of hidden layers (the depth of the neural network), $f_{\theta}(X)$ is the final output vector of the neural network, $n$ is a predetermined hyper-parameter that ensures the slope is greater than 1, and $a$ is a parameter that can modify the slope of the activation function.
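As an illustration, a compact PyTorch sketch of this structure follows (PyTorch is the framework used in the experiments of Section 3). The class and argument names are our own, and the adaptive Sigmoid implements equation (4) with $na = 1$ at initialization.

```python
import torch
import torch.nn as nn

class AdaptiveSigmoid(nn.Module):
    """sigma(n*a*z): n > 1 is a fixed hyperparameter, a is trainable."""
    def __init__(self, n_scale: float = 5.0):
        super().__init__()
        self.n = n_scale
        self.a = nn.Parameter(torch.tensor(1.0 / n_scale))

    def forward(self, z):
        return torch.sigmoid(self.n * self.a * z)

class ImprovedFCN(nn.Module):
    """Two encoders U and R plus gated residual hidden layers."""
    def __init__(self, dim_in, dim_hidden, dim_out, depth=5, n_scale=5.0):
        super().__init__()
        self.act = AdaptiveSigmoid(n_scale)
        self.enc_u = nn.Linear(dim_in, dim_hidden)   # encoder U
        self.enc_r = nn.Linear(dim_in, dim_hidden)   # encoder R
        self.first = nn.Linear(dim_in, dim_hidden)
        self.hidden = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden) for _ in range(depth)])
        self.last = nn.Linear(dim_hidden, dim_out)

    def forward(self, x):
        u = self.act(self.enc_u(x))
        r = self.act(self.enc_r(x))
        h = self.act(self.first(x))
        for layer in self.hidden:
            z = self.act(layer(h))
            h = (1.0 - z) * u + z * r   # element-wise gated residual
        return self.last(h)
```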

2.3. Discrete-Time Model of PINN

In this section, we provide a detailed introduction to the discrete-time PINN model as outlined in [14].

The main idea of the discrete-time model in PINN is to integrate neural networks with traditional Runge–Kutta methods. Consider the general form of a PDE:

$$u_t + \mathcal{N}[u] = 0, \qquad x \in \Omega,\ t \in [0, T], \tag{5}$$

where $u(t, x)$ denotes the solution of the PDE, $\mathcal{N}[\cdot]$ is a nonlinear differential operator, and $\Omega$ is a subset of $\mathbb{R}^{d}$.

Applying the general form of the $q$-stage Runge–Kutta method to the above PDE (5), we obtain

$$\begin{aligned} u^{n+c_i} &= u^n - \Delta t \sum_{j=1}^{q} a_{ij}\, \mathcal{N}\big[u^{n+c_j}\big], \quad i = 1, \dots, q, \\ u^{n+1} &= u^n - \Delta t \sum_{j=1}^{q} b_j\, \mathcal{N}\big[u^{n+c_j}\big]. \end{aligned} \tag{6}$$

Here, $u^{n+c_j}(x) = u(t^n + c_j \Delta t, x)$ for $j = 1, \dots, q$, and $\Delta t$ is the step size. For ease of writing and comprehension, equation (6) above can be equivalently represented as

$$u^n = u_i^n, \quad i = 1, \dots, q + 1, \tag{7}$$

where

$$\begin{aligned} u_i^n &:= u^{n+c_i} + \Delta t \sum_{j=1}^{q} a_{ij}\, \mathcal{N}\big[u^{n+c_j}\big], \quad i = 1, \dots, q, \\ u_{q+1}^n &:= u^{n+1} + \Delta t \sum_{j=1}^{q} b_j\, \mathcal{N}\big[u^{n+c_j}\big]. \end{aligned} \tag{8}$$

The output layer of the discrete-time PINN has a number of neurons equal to the number of Runge–Kutta stages plus one, i.e., $q + 1$. The output layer is described as follows:

$$\big[u^{n+c_1}(x),\ \dots,\ u^{n+c_q}(x),\ u^{n+1}(x)\big]. \tag{9}$$

By combining the Runge–Kutta formulas (8) with the output values (9) of the discrete-time PINN, we can derive a neural network architecture with $x$ as input and

$$\big[u_1^n(x),\ u_2^n(x),\ \dots,\ u_{q+1}^n(x)\big]$$

as output.

In the discrete-time model of a PINN, the time axis is typically divided into several time steps. Besides spatial coordinates, the neural network’s input also includes information about the current time step. This enables the neural network to learn temporal dynamics and predict the solution at each time step. Based on the PINN’s discrete-time model, we have constructed a PINN architecture based on the Radau IIA method for solving high-index DAEs, which will be described in the following section.

2.4. PINN Based on Radau IIA Method

In this section, taking the discrete-time PINN model as the foundation and incorporating the improved fully connected neural network, we construct a PINN architecture based on the Radau IIA method, as illustrated in Figure 2.

Firstly, we construct a neural network with multiple inputs and multiple outputs, where the input is the collection of differential variables $x^n$ at the current time point and the outputs are

$$\big[x^{n+c_1},\ \dots,\ x^{n+c_s},\ x^{n+1}\big] \quad \text{and} \quad \big[y^{n+c_1},\ \dots,\ y^{n+c_s},\ y^{n+1}\big].$$

The first $s$ of the differential outputs represent intermediate differential variables, and the first $s$ of the algebraic outputs represent intermediate algebraic variables, where $c_s = 1$, so the $s$-th stage coincides with the step endpoint.

Secondly, based on the structure of the index-2 DAE system and the characteristics of the Radau IIA method, we further design an improved fully connected neural network for both the differential variable part and the algebraic variable part of the system. There are two specific design approaches: one assigns a single neural network to all differential variables and two neural networks to the algebraic variables, while the other assigns a separate neural network to each differential variable and keeps the algebraic variable part unchanged. In the algebraic variable part, one of the two networks predicts the first $s$ values, while the other predicts the $(s+1)$-th value. The second approach, illustrated in Figure 2, effectively increases the size of the network relative to the first. It also offers the benefit of avoiding the negative impact on other variables that could result from the unsuccessful parameter optimization of a single variable during training, which enhances both the precision and the generalization ability of the model. In further testing, the training results of the second approach were indeed more precise than those of the first, consistent with this expectation. All subsequent experiments in this paper are therefore based on the second approach.

Thirdly, based on the designed network structure, this paper constructs the loss function as

$$\mathcal{L}(\theta) = \omega_x \mathcal{L}_x + \omega_y \mathcal{L}_y + \omega_{y_{s+1}} \mathcal{L}_{y_{s+1}},$$

where: (i) $\mathcal{L}_x$ is the loss related to the differential network,

$$\mathcal{L}_x = \frac{1}{N_b} \sum_{n=1}^{N_b} \sum_{i=1}^{s+1} \big\| x^n - \hat{x}_i^n(\theta_x) \big\|_2^2, \qquad \hat{x}_i^n := x^{n+c_i} - h \sum_{j=1}^{s} a_{ij} f\big(x^{n+c_j}, y^{n+c_j}\big), \quad i = 1, \dots, s,$$

with $\hat{x}_{s+1}^n := x^{n+1} - h \sum_{j=1}^{s} b_j f\big(x^{n+c_j}, y^{n+c_j}\big)$; (ii) $\mathcal{L}_y$ is the loss associated with the algebraic network, penalizing the residual of the algebraic constraint at the stage values,

$$\mathcal{L}_y = \frac{1}{N_b} \sum_{n=1}^{N_b} \sum_{i=1}^{s} \big\| g\big(x^{n+c_i}\big) \big\|_2^2;$$

and (iii) $\mathcal{L}_{y_{s+1}}$ is the loss related to the last value of the controlled algebraic variable: since $c_s = 1$, the final output of the algebraic network must agree with its penultimate output,

$$\mathcal{L}_{y_{s+1}} = \frac{1}{N_b} \sum_{n=1}^{N_b} \big\| y^{n+1} - y^{n+c_s} \big\|_2^2.$$

Here, $\omega_x$ represents the loss weight for the differential neural network, $\omega_y$ is the weight for the algebraic neural network, and $\omega_{y_{s+1}}$ signifies the weight for the network controlling the algebraic variable prediction. The parameters $a_{ij}$ and $b_j$ are specific to the Radau IIA method. $N$ is the total number of samples, $N_b$ is the number of training samples in the current batch, and $\theta = (\theta_x, \theta_y)$ denotes the neural network parameters, where $\theta_x$ corresponds to the differential network and $\theta_y$ to the algebraic network. $x^n$ corresponds to the sample data of the model, $x^{n+c_j}$ stands for the values of the intermediate differential variables, and $y^{n+c_j}$ signifies the values of the intermediate algebraic variables. Furthermore, $x^{n+1}$ represents the output value of the differential neural network, the notation $\|\cdot\|_2^2$ refers to the square of the L2 norm, $y^{n+1}$ represents the final output of the algebraic neural network, and $y^{n+c_s}$ denotes the penultimate output of the algebraic neural network.

Finally, we use gradient descent to solve for the weights, biases, and other parameters of the PINN:

$$\theta^{k+1} = \theta^{k} - \eta \nabla_{\theta} \mathcal{L}\big(\theta^{k}\big),$$

where $\eta$ is the learning rate; in practice we use the Adam optimizer.
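To make the training step concrete, the following is a hedged PyTorch sketch of the loss assembly and optimization loop. For compactness it uses one differential network and one algebraic network, each emitting all $s+1$ stacked outputs, whereas the experiments assign one network per variable; all names (radau_pinn_loss, w_x, ...) are our own illustrative choices, and f, g, A, b, h stand for the DAE right-hand side, constraint, Radau IIA coefficients (as torch tensors), and step size from the preceding sections.

```python
import torch

def radau_pinn_loss(x_n, diff_net, alg_net, f, g, A, b, h,
                    w_x=1.0, w_y=1.0, w_last=1.0):
    """Loss over a batch of initial values x_n of shape (N, d)."""
    s = b.shape[0]
    N, d = x_n.shape
    X = diff_net(x_n).reshape(N, s + 1, d)   # x^{n+c_1..c_s}, x^{n+1}
    Y = alg_net(x_n).reshape(N, s + 1, -1)   # algebraic stages + endpoint
    F = torch.stack([f(X[:, j], Y[:, j]) for j in range(s)], dim=1)

    # (i) pull each stage/endpoint back through the Radau formulas;
    #     the result must reproduce the known value x^n
    res = [X[:, i] - h * (A[i].view(1, s, 1) * F).sum(1) - x_n
           for i in range(s)]
    res.append(X[:, s] - h * (b.view(1, s, 1) * F).sum(1) - x_n)
    loss_x = sum((r ** 2).sum(dim=1).mean() for r in res)

    # (ii) algebraic constraint 0 = g(x) at the stage values
    loss_y = sum((g(X[:, i]) ** 2).sum(dim=1).mean() for i in range(s))

    # (iii) c_s = 1, so the endpoint output of the algebraic network
    #       must agree with its s-th (penultimate) stage output
    loss_last = ((Y[:, s] - Y[:, s - 1]) ** 2).sum(dim=1).mean()

    return w_x * loss_x + w_y * loss_y + w_last * loss_last

# Training with Adam, as in the experiments of Section 3 (sketch):
# opt = torch.optim.Adam([*diff_net.parameters(), *alg_net.parameters()])
# for _ in range(100_000):
#     opt.zero_grad()
#     loss = radau_pinn_loss(x0, diff_net, alg_net, f, g, A, b, h=0.05)
#     loss.backward()
#     opt.step()
```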

2.5. Time-Domain Decomposition of Neural Networks

In this section, based on an analysis of the existing limitations of the PINN architecture, we adopt a time-domain decomposition strategy using neural networks.

One limitation of the PINN model is its relatively low predictive accuracy. The inherent inaccuracies involved in solving high-dimensional non-convex optimization problems can lead to local minima, making it challenging to push absolute errors much lower. Another evident limitation is the high training cost [16]. Our proposed PINN model based on the Radau IIA method may encounter similar issues. Furthermore, during training, the iterative format of the Radau IIA method does not fully realize its high-precision advantages.

To address these issues, we propose a time-domain decomposition strategy for the neural networks, as illustrated in Figure 3. With this approach, we partition the original problem into successive time segments, enhancing solution accuracy while leveraging iterative training. In other words, the predicted values from the previous time segment serve as input values for the subsequent segment, so knowing the data values at the initial point of the first segment is sufficient to iteratively compute the solution over the entire time domain. This approach significantly reduces the amount of required data: only the values of the differential variables at the initial point, $x(t_0) = x_0$, are needed. Using the time-domain decomposition structure, we can iteratively determine the desired values over the whole interval $[t_0, T]$. Moreover, if the initial values for each time segment were available, the networks of all segments could be trained in parallel, significantly reducing the model training time.
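A minimal sketch of this marching procedure follows; make_nets, train_segment, and predict_endpoint are hypothetical helpers standing in for the per-segment construction and training described above.

```python
import torch

def solve_by_segments(x0, t0, T, h, make_nets, train_segment, predict_endpoint):
    """March across [t0, T] in segments of length h; each segment is seeded
    with the previous segment's predicted endpoint."""
    x_n, t = x0, t0
    trajectory = [x0]
    while t < T:
        diff_net, alg_net = make_nets()        # fresh networks per segment
        train_segment(diff_net, alg_net, x_n)  # minimize the Radau loss
        x_n = predict_endpoint(diff_net, x_n)  # x^{n+1} seeds the next segment
        trajectory.append(x_n)
        t += h
    return torch.stack(trajectory)
```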

3. Numerical Experiments

In this section, we apply the PINN based on the Radau IIA method to solve two high-index DAE systems and further investigate the influence of the order of the Radau IIA method on the solution results. The experiments were conducted on a Windows 10 operating system with an Intel(R) Core(TM) i7-10875H CPU @ 2.30 GHz. We used Python 3.9 and implemented the neural network architecture with the GPU version of PyTorch 1.12.1. Additionally, this paper uses two formulas to measure the accuracy of the experiments. One is the commonly used absolute error (AE), defined as $\mathrm{AE}(t_i) = |\hat{u}(t_i) - u(t_i)|$, which reflects the magnitude of the deviation between the neural network's predicted solution $\hat{u}$ and the true solution $u$. The other is the mean absolute error (MAE), defined as $\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} |\hat{u}(t_i) - u(t_i)|$, which is used to assess the differences in accuracy among the different orders of the Radau IIA method.
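For reference, a minimal implementation of the two metrics (the array arguments and names are illustrative):

```python
import numpy as np

def absolute_error(y_pred, y_true):
    """AE at each sample point: |u_hat(t_i) - u(t_i)|."""
    return np.abs(y_pred - y_true)

def mean_absolute_error(y_pred, y_true):
    """MAE over all N sample points."""
    return np.abs(y_pred - y_true).mean()
```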

3.1. Hessenberg-Type DAEs System

In this section, we explore a classical Hessenberg-type DAE system with an index of 2 that possesses an exact analytical solution [12]. The system consists of four differential function variables (the components of $x$) and one algebraic function variable $y$, with initial values prescribed at the initial time $t_0 = 0$; exact closed-form solution expressions are known for all five variables (the specific equations and solutions are given in [12]).

Firstly, we consider the impact of different orders of the Radau IIA method, namely the 3rd, 5th, 9th, and 13th orders (corresponding to $s$ = 2, 3, 5, 7), on the precision of the neural network solutions. Secondly, we explore the influence of activation functions on the PINN. Common activation functions for hidden layers include Sigmoid, TanH, Sin, and ReLU, among others. When solving smooth, continuous systems, ReLU is generally not chosen; Sigmoid, TanH, or Sin activation functions are preferred. In our experiments, Sigmoid produced the best approximate solutions. Within this neural network framework, the initial values of the four differential variables in each time segment are used as the training dataset. The step size is 0.05, meaning that each time segment has a length of 0.05. Each network model in every time segment comprises 5 hidden layers with 100 neurons per layer, uses Sigmoid as the activation function, and is trained with the Adam optimizer for 100,000 iterations. The experimental results on the time interval from 0 to 1 are presented in Figure 4.

In Figure 4, the mean absolute errors of the 3rd- and 13th-order Radau IIA methods are read on the blue Y-axis, while those of the 5th- and 9th-order methods are read on the red Y-axis. For all the differential function variables, the 3rd- and 13th-order Radau IIA methods exhibit significantly higher mean absolute errors than the 5th- and 9th-order methods. For the algebraic variable $y$, the 13th-order Radau IIA method has notably higher mean absolute errors than the 3rd-, 5th-, and 9th-order methods.

Additionally, as seen on the red Y-axis, for all differential function variables the overall trend of the 9th-order Radau IIA method's mean absolute errors lies significantly above that of the 5th-order method. For the algebraic variable $y$, the 9th-order Radau IIA method likewise exhibits notably higher mean absolute errors than the 5th-order method. In other words, the PINN based on the 5th-order Radau IIA method achieves the highest precision in terms of mean absolute errors.

The absolute error results obtained using the 5th-order method are shown in Figure 5, where the absolute errors of three of the variables are read on the blue Y-axis and those of the remaining two on the red Y-axis. From the figure, it is evident that the absolute errors of the neural network's predictions for all four differential variables, and likewise for the algebraic variable $y$, remain at consistently low orders of magnitude over the whole interval. The experimental results indicate that the neural network's predicted solutions reach a high level of accuracy.

For the neural network structure designed in this paper, the predicted values of the differential variables serve as the initial values of the network input for the next time segment, so the precision of the differential variables affects the results of the subsequent segment. In this context, the absolute errors of all four differential variables are already small enough that they do not significantly degrade the precision of the next time segment.

The key parameters controlling the performance of our algorithm are the number of Radau IIA stages $s$ and the time-step size $h$. We consider different orders (3, 5, 9, 13, corresponding to $s$ = 2, 3, 5, 7) of the Radau IIA method at various time steps ($h$ = 0.025, 0.05, 0.1) to investigate their impact on the precision of the neural network solutions. Throughout the experiments, we keep the network architecture fixed at 5 hidden layers of 100 neurons each while varying the number of stages and the time-step size.
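The sweep itself is straightforward; a hypothetical driver might look as follows, where run_experiment wraps the per-segment training of Section 2.5 and returns the MAE:

```python
# Illustrative parameter sweep over stage counts and step sizes.
for s in (2, 3, 5, 7):               # Radau IIA orders 3, 5, 9, 13
    for h in (0.025, 0.05, 0.1):
        mae = run_experiment(stages=s, step=h)
        print(f"s={s} (order {2 * s - 1}), h={h}: MAE={mae:.2e}")
```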

In Tables 2 and 3, we summarize the mean absolute error precision of the differential and algebraic function variables of the Hessenberg-type DAE system solved using the Radau IIA method with different numbers of stages and time-step sizes; the bold values represent the optimal accuracy achieved at each time step. Specifically, we can clearly observe that when the time step is set to 0.025, 0.05, or 0.1, the Radau IIA method with $s$ = 3 stages consistently produces accurate results. Furthermore, for the Radau IIA method with $s$ = 3 and a time-step size of 0.05, the neural network solution achieves optimal accuracy. Additionally, the overall trend of the mean absolute error for the 9th-order ($s$ = 5) Radau IIA method is similar to that of the 5th-order ($s$ = 3) method.

For the differential function variables, the Radau IIA methods of all four orders demonstrate consistently stable mean absolute errors across the various time steps. For the algebraic variable, however, the 3rd- and 13th-order Radau IIA methods exhibit significantly higher mean absolute errors than the 5th- and 9th-order methods at all three time steps. Using higher-order Radau IIA methods with smaller time steps will undoubtedly speed up the convergence of the neural network, but it also demands a larger network, and these adjustments require experimental tuning. Overall, the 5th-order ($s$ = 3) Radau IIA method proves to be the most stable. This characteristic is also evident in the pendulum DAE system discussed below, where its numerical stability remains uncompromised. This makes the 5th-order ($s$ = 3) Radau IIA method an ideal choice for solving stiff problems.

3.2. DAE System of the Pendulum Model

In this section, we study the classical pendulum DAE system with an index of 2. In the standard index-2 formulation, the position constraint $x^2 + y^2 = l^2$ is replaced by its derivative, the velocity-level constraint, giving

$$\dot{x} = u, \qquad \dot{y} = v, \qquad \dot{u} = -\lambda x, \qquad \dot{v} = -\lambda y - g, \qquad 0 = x u + y v,$$

where the pendulum length $l$ and the gravitational constant $g$ are variable parameters, both set to 1 in the experiments of this section, and consistent initial values are prescribed at the initial time. Here, $x$, $y$, $u$, and $v$ are differential function variables, while the Lagrange multiplier $\lambda$ is an algebraic function variable. This DAE system has no exact analytical expression, so we directly solve the reduced underlying ODE of the system using the high-precision ODE solvers of the Python scientific computing library SciPy and compare the resulting approximate solution with the values predicted by the neural network. As before, we consider the impact of the different orders (3, 5, 9, 13, corresponding to $s$ = 2, 3, 5, 7) of the Radau IIA method on the accuracy of the neural network solutions, and we again explore the effect of activation functions on the PINN; in this experiment, the Sin activation function provides the better approximation. To maintain consistency across the numerical experiments, all other network settings match those of the previous section. The results obtained are shown in Figure 6.
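As a sketch of how such a reference solution can be produced (assuming the standard index-2 formulation above with $l = g = 1$), the Lagrange multiplier can be eliminated analytically by differentiating the constraint once more, giving $\lambda = (u^2 + v^2 - g\,y)/l^2$, and the resulting ODE integrated with SciPy's stiff Radau solver. The initial values below are illustrative, chosen only to satisfy both the position and velocity constraints.

```python
import numpy as np
from scipy.integrate import solve_ivp

g_const, l_const = 1.0, 1.0

def reduced_ode(t, state):
    x, y, u, v = state
    lam = (u**2 + v**2 - g_const * y) / l_const**2  # eliminated multiplier
    return [u, v, -lam * x, -lam * y - g_const]

state0 = [1.0, 0.0, 0.0, 0.0]   # satisfies x^2+y^2 = l^2 and x*u + y*v = 0
sol = solve_ivp(reduced_ode, (0.0, 1.0), state0, method="Radau",
                rtol=1e-10, atol=1e-10, dense_output=True)

def lambda_ref(t):
    """Reference algebraic variable recovered from the ODE solution."""
    x, y, u, v = sol.sol(t)
    return (u**2 + v**2 - g_const * y) / l_const**2
```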

In Figure 6, the mean absolute errors of the 3rd- and 13th-order Radau IIA methods are read on the blue Y-axis, while those of the 5th- and 9th-order methods are read on the red Y-axis. For three of the differential function variables, the 3rd-order Radau IIA method has significantly higher mean absolute errors than the 5th-, 9th-, and 13th-order methods. For the remaining differential function variable, the 13th-order Radau IIA method exhibits significantly higher mean absolute errors than the 3rd-, 5th-, and 9th-order methods. For the algebraic variable $\lambda$, the 3rd- and 13th-order Radau IIA methods have significantly higher mean absolute errors than the 5th- and 9th-order methods.

Additionally, as seen on the red Y-axis, for all differential function variables the overall trend of the 9th-order Radau IIA method's mean absolute errors is similar to that of the 5th-order method. For the algebraic variable $\lambda$, however, the 9th-order method exhibits significantly higher mean absolute errors than the 5th-order method in the later time regions. In other words, the PINN based on the 5th-order Radau IIA method again achieves the highest precision in terms of mean absolute errors.

The absolute error results obtained using the 5th-order method are shown in Figure 7, where the absolute errors of the four differential variables are read on the blue Y-axis and those of the algebraic variable $\lambda$ on the red Y-axis. From the figure, we can see that the absolute errors of all four differential variables, as well as that of the algebraic variable, remain at consistently low orders of magnitude. The experimental results indicate that the neural network's predicted solutions for the pendulum DAE system also achieve high precision.

Similarly, we consider the impact of the different orders (3, 5, 9, 13, corresponding to $s$ = 2, 3, 5, 7) of the Radau IIA method at various time steps ($h$ = 0.025, 0.05, 0.1) on the precision of the neural network solutions for the pendulum DAE system. In Tables 4 and 5, the bold values represent the optimal accuracy achieved at each time step. Clearly, the overall trends in the mean absolute errors of both the differential and the algebraic function variables for the 5th-order ($s$ = 3) and 9th-order ($s$ = 5) Radau IIA methods consistently remain within the same order of magnitude. Moreover, the solution accuracy of these two orders remains highly stable regardless of the chosen time-step value.

In general, the results of solving the pendulum DAE system using Radau IIA methods of different orders and time steps are consistent with those obtained for the Hessenberg-type DAE system above.

4. Summary and Conclusions

DAE systems are widely employed in various domains, including fluid dynamics, multi-body dynamics, and control theory. In practical physical modeling, most DAE models are either low-index DAEs or high-index Hessenberg-type DAEs. Classical implicit numerical methods are suitable for certain classes of high-index DAEs but often suffer varying degrees of order reduction in numerical accuracy. Recently, a novel neural network method, DAE-PINN, was developed for solving low-index DAEs, but it cannot directly handle high-index systems. This paper therefore proposes a PINN-based approach using the Radau IIA method to solve high-index DAE systems. The method combines the strengths of the Radau IIA method with an improved fully connected neural network structure and employs a time-domain decomposition strategy to enhance both the efficiency and the accuracy of the solution.

In this paper, two high-index systems, namely Hessenberg-type DAEs and the pendulum model DAEs, are studied as examples. The research takes into account the influence of different orders and time-step sizes of the Radau IIA method, as well as of the activation functions, on the accuracy of the neural network solutions. Generally, employing higher-order Radau IIA methods enhances the neural network's generalization capability. However, the comparative experiments on the two examples show that, across varying time-step sizes, a PINN employing the 5th-order Radau IIA method yields the highest accuracy and stability, consistent with the recognition of Radau-5 as a high-precision numerical method [3]. Further experimental results indicate that in the high-index systems the absolute errors of all differential variables, and likewise of the algebraic variables, are kept at consistently low orders of magnitude. This numerical accuracy surpasses the corresponding results in the literature [25] and, to some extent, the accuracy achieved by the DAE-PINN method [27]. These results demonstrate that our method can directly and accurately solve high-index DAE systems with strong generalization capabilities, offering a viable approach for high-precision solutions of even higher-index DAEs or of challenging systems of partial differential algebraic equations. We have kept the depth and width of the neural networks the same as in DAE-PINN [27] and have not studied their impact on the accuracy of our method in detail; this is left for future work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

A preprint of this manuscript has previously been published [28]. This manuscript has been presented as a paper in the International Journal of Intelligent Systems.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The study was supported by the National Natural Science Foundation of China (Grant No. 12201144), the Guangdong Basic and Applied Basic Research Foundation of China (Grant No. 2020A1515110554), the Science and Technology Foundation of Guizhou Province of China (Grant No. QKHJC-ZK[2021]YB015), and the Chongqing Talents Plan Youth Top-Notch Project of China (Grant No. 2021000263).