Abstract

The big data economy is converging with artificial intelligence, gradually transforming the traditional statistical economy into an intelligent economy. Constrained by subjective human judgment, traditional economic models have low prediction accuracy. In traditional statistical methods, limited sample data also makes it impossible to effectively monitor and comprehensively forecast macroeconomic development trends. The data economy has fundamentally transformed the traditional means of economic analysis, because the digital economy enables economic connectivity and precise data sharing, which can be used for accurate economic statistics and mathematical analysis. Meanwhile, in terms of statistical methods, artificial intelligence no longer relies on subjective human judgment; it attends more objectively to economic cause and effect and is more accurate and comprehensive. This paper proposes an economic forecasting method that combines artificial intelligence with big data analytics. Our model addresses economic statistics, equilibrium, and future prediction using big data. Through a deep-learning-based artificial intelligence method, the political, human-activity, and social-environmental factors present in actual economic activities are effectively combined into the main subjects of analysis affecting the economy. The results show that our model can serve as a basic model for economic statistics, economic analysis, economic decision-making, economic self-regulation, and other functions under the current development trend of the data economy.

1. Introduction

The term “digital economy” was first proposed by the American economist Don Tapscott in the 1990s [1]. The digital economy has since attracted wide attention, and scholars' main research directions can be divided into three parts: (1) the concept and connotation of the digital economy; (2) the proportion and development of the digital economy within each country's economy; (3) research on digital economy theory. The main carrier of the digital economy is data, which lends itself to sharing and dissemination over the Internet. It is this transmissibility of data that has brought us into the digital age and created a new product of the era: big data. Big data captures the raw, real details of each economic activity, offering not only a large number of samples but also accurate and detailed numerical descriptions.

This information is very useful for economic analysis and forecasting. However, traditional manual forecasting methods struggle to compile statistics on and analyze such large quantities of data; the amount of information far exceeds what human effort can process. In this context, big data analytics was born [2], enabling people to use artificial intelligence to analyze massive data quickly and efficiently [3]. Big data analysis methods based on artificial intelligence are widely applied in stock analysis and prediction [4], industry analysis [5], capitalist economic development [6], climate warming [7], and program popularity prediction [8]. Big data analysis can not only support scientific statistics, analysis, and prediction of the economy but also guide decision-makers in formulating more reasonable economic policies. With the emergence of big data, the massive increase in data samples, and the growing complexity of international social and economic factors, traditional economic forecasting models can no longer meet the development needs of the current digital economy. This paper is therefore devoted to economic modeling and forecasting based on artificial intelligence and big data analytics under the current conditions of the digital economy.

An economic model is a theoretical structure used to describe the interdependence between economic variables related to the economic phenomenon under study. It is an analytical method that describes a real-world situation in a greatly simplified way. The real world, composed of primary and secondary variables, is so complex that rigorous analysis is impossible unless the secondary factors are excluded. By making certain assumptions, one can rule out many secondary causes and thereby build a model; the special case specified by the assumptions can then be analyzed through the model. An economic model can be represented by equations, graphs, or words [9, 10], as in the marginal analysis model and mathematical models of the decision-making process. These classical methods can be expressed by explicit formulas, whereas new artificial intelligence methods learn an implicit economic model that cannot be expressed explicitly but whose predictions are more accurate and require no manual design.

Economic forecasting rests on economic theory: it calculates and analyzes economic data from a given period and applies relevant forecasting methods and technologies to study economic development and change. The goal of economic analysis and forecasting is to analyze data over a certain period, qualitatively or quantitatively, to uncover the rules of economic development and predict future trends. Traditional economic forecasting is heavily affected by human factors; it is more subjective, and forecasts by different people often diverge widely. Artificial-intelligence-based economic forecasting, by contrast, pays more attention to historical data and makes objective evaluations and predictions, which carry more reference value.

Traditional economic modeling and forecasting rely heavily on manual model design and manual statistics, which cannot cope with the massive data of the current digital economy. Moreover, traditional analysis is too subjective and lacks both statistics over massive samples and scientific prediction. To this end, we propose economic modeling and forecasting methods based on big data analysis and artificial intelligence. Many factors influence economic activity; each factor affects the economy differently, its influence varies over time, and the factors are interconnected, so relating them manually is very tedious. We therefore put forward a graph-network-based model for analyzing the various economic factors. This graph network structure supports the simultaneous input of multiple factors and correlates them automatically by learning the weight of each factor. To make economic forecasts comprehensively, we also propose a forecasting model based on Long Short-Term Memory (LSTM) [11], which can be used to forecast future economic trends.

The rest of this paper is organized as follows: Section 2 introduces work related to our research. The preliminaries of the proposed method are introduced in Section 3. In Section 4, we propose our method and describe its details. Section 5 reports the experimental results. The final section presents the conclusions of our study.

2. Related Works

In this section, we review related work on economic situation prediction based on artificial intelligence methods, including evolutionary algorithms, data mining, machine learning, and computer vision. In the following, we describe these methods in detail.

2.1. Evolutionary-Algorithm-Based Economic Situation Prediction

Chen [12] collected data from 200 listed companies on the Taiwan stock exchange to compare traditional and nontraditional statistical methods for predicting financial distress. The nontraditional methods include decision tree classification, neural networks, and evolutionary computation. Specifically, the author uses principal component analysis (PCA) to extract appropriate variables and conducts extensive experiments. The results show that traditional statistical methods handle large data sets well without sacrificing prediction performance, while the nontraditional, intelligent techniques achieve better performance on small data sets but do not perform well on large ones. The experiments also show that particle swarm optimization and support vector machines can be combined (PSO-SVM) to predict potential financial distress. Claveria et al. [13] proposed an empirical modeling method based on genetic programming that predicts economic growth from survey expectation data. Specifically, the authors use an evolutionary algorithm to estimate a symbolic regression that links survey-based expectations with the quantitative variables used as measurement standards, deducing a mathematical functional form that tracks the target variables. The empirically generated set of economic growth indicators is used as a building block to predict the evolution of GDP. The authors also used GDP estimates to evaluate the impact of the 2008 financial crisis on the accuracy of expectations about the evolution of economic activity. Claveria et al. [14] used survey expectations of various economic variables to predict actual activity. The authors propose an empirical method to deduce the mathematical functional form linking survey expectations with economic growth. Specifically, they combine symbolic regression with genetic programming to generate two survey-based indicators: a perception index, using agents' current evaluations, and an expectation index, using their expectations for the future. To find the combination of these two indexes that best replicates the evolution of economic activity, the authors use a portfolio management procedure called index tracking, deriving the relative weights of the two indicators with the generalized reduced gradient algorithm. Hu et al. [15] provided the first systematic literature review on the discovery of stock trading rules with evolutionary computation (EC) techniques. The authors divide the analysis methods into three categories (fundamental analysis, technical analysis, and hybrid analysis) and the EC techniques into three categories (evolutionary algorithms, swarm intelligence, and hybrid EC techniques). In the discovery of technical trading rules, there is an obvious bias toward applications based on genetic algorithms and genetic programming. The authors also survey the research focus and gaps in applying EC techniques to trading rule discovery and put forward a technical roadmap for future research. El-Henawy et al. [16] used a multilayer perceptron neural network to predict a stock index and applied three search algorithms to obtain the best network structure and parameters, improving the model's prediction accuracy and reducing its training time. Specifically, the author ran experiments with simulated annealing, a genetic algorithm, and a hybrid of the two, compared the results, and drew two conclusions: (1) in terms of accuracy, simulated annealing is the best algorithm, with accuracy 40% higher than the genetic algorithm and 30% higher than the hybrid method; (2) in terms of training time, simulated annealing is again the best, followed by the genetic algorithm, with the hybrid method taking the longest. Mirowski [17] tried to isolate and gauge the depth of the transformation of economic concepts in recent economic research, focusing on five areas: mechanism design, zero-intelligence agents, market microstructure, engineering economics, and artificial intelligence. The author claims that this shift can be identified with a concern to treat markets as different algorithms and can have a far-reaching impact on the conceptual framework used to solve economic problems. Moreover, the author designs an implicit alternative, an evolutionary computational economics based on automata theory, to place the problems of different markets at the center of research.

2.2. Data-Mining-Based Economic Situation Prediction

As a branch of computer science, data mining has been widely used in the field of finance. Data mining and machine learning are essential means of managing big data and supporting enterprise efficiency and business intelligence. Moreover, data mining is of great value in financial business.

Aiming at the difficulty of evaluating companies' going-concern status, Koh and Low [18] put forward several going-concern prediction models based on statistical methods to help accountants and auditors. Specifically, the authors compare the usefulness of logistic regression, decision trees, and neural networks in predicting the sustainable operation of enterprises. The classification results show the potential of data mining techniques in a going-concern prediction setting; in particular, the decision tree model outperforms the logistic regression and neural network models. Because data mining techniques are powerful for analyzing complex nonlinear and interactive relationships, traditional statistical methods can be used as a supplement when building going-concern prediction models. Sung et al. [19] used data mining methods to develop bankruptcy prediction models adapted to normal and crisis economic conditions. The models can be used to observe dynamic changes from the normal state to the crisis state and finally to give a bankruptcy classification. The models show that under normal conditions the main variables predicting bankruptcy are total-asset cash flow and capital productivity, while under crisis conditions they are liability cash flow, capital productivity, fixed assets, and shareholders' equity. When the normal model is applied to a crisis situation, the accuracy of the bankruptcy classification declines significantly, so the authors conclude that it is reasonable to adopt different models under crisis conditions. Kunnathuvalappil and Hariharan [20] focused on applications of data mining in forecasting stocks, managing portfolios, and analyzing investment risk, as well as in identifying and predicting bankruptcy, foreign exchange rates, financial fraud, and other economic behavior. Zhang and Zhou [21] described the achievements of data mining in financial forecasting from the perspectives of technology and application. The authors comprehensively compare different data mining techniques and their achievements in different financial applications, and they also describe the challenges facing future research in this field and the future development trends. To evaluate applications of data mining in finance, such as financial prediction and classification, Kwak et al. [22] proposed a multicriteria linear programming method that uses existing bankruptcy data to predict bankruptcy. Their experiments show that the proposed method yields better predictions on financial data than traditional multiple discriminant analysis and logit analysis, with overall prediction accuracy similar to that of decision trees and support vector machines. Aiming at the problem of finding the bankruptcy factors of small enterprises, Ptak-Chmielewska [23] designed a complex enterprise bankruptcy prediction model and studied whether increasing model complexity improves prediction efficiency. Specifically, the author analyzed a sample of 806 small enterprises and estimated several simple and complex models, such as logistic regression, gradient boosting, and support vector machines. The results show that simple and complex models are equally effective in bankruptcy prediction. Because data mining models are usually prone to overfitting, the paper analyzes the most important financial factors that predict the bankruptcy of small enterprises.

2.3. Machine-Learning-Based Economic Situation Prediction

Hasanuzzaman et al. [24] surveyed 130 papers on machine learning for financial analysis from 1995 to 2010 and presented the development of the most advanced machine learning techniques, including ensemble classifiers and hybrid classifiers. The authors also expound the advantages and disadvantages of machine-learning-based bankruptcy prediction and credit scoring models. Regarding the factors that predict the bankruptcy of small enterprises, the authors design a machine-learning-based enterprise bankruptcy prediction model and study whether increasing the scale of the model improves prediction efficiency; the results show that small-scale and large-scale models have similar effects in bankruptcy prediction. Financial asset prices are nonlinear, dynamic, and chaotic, and forecasting price models for financial markets is a subject in high demand and difficult to master. Given the high research output on machine learning for predicting financial market prices, Henrique et al. [25] reviewed 57 texts and proposed a classification of markets, assets, methods, and variables. Among the machine learning prediction models surveyed, it is particularly noteworthy that most studies use data from the North American market, and the most commonly used prediction models are support vector machines and neural networks. Obthong et al. [26] reviewed research on machine learning models and algorithms for improving the accuracy of stock price prediction. Stock market trading is an activity in which investors need fast and accurate information to make effective decisions. Many stocks are traded on an exchange, so a great many factors affect the stock decision-making process, and stock pricing behavior is uncertain and difficult to predict. Research has therefore sought the most effective prediction model, one that generates the most accurate predictions with the lowest error percentage. Huang and Yen [27] investigated a large body of recent work on machine learning models for predicting financial distress, including supervised, unsupervised, and hybrid supervised-unsupervised models. The prediction performance of four supervised models (a traditional support vector machine, hybrid associative memory with translation, hybrid GA fuzzy clustering, and extreme gradient boosting) is compared with that of an unsupervised deep belief network (DBN) classifier and a hybrid DBN-SVM model, using a real-world financial data set for the comparative experiments. The results show that XGBoost provides the most accurate prediction of financial distress among the four supervised algorithms; in addition, the hybrid DBN-SVM model generates more accurate predictions than an SVM or a DBN classifier alone. Financial crisis prediction is a most complex and consequential problem for companies and small-scale enterprises. Vadlamudi [28] investigated how newly adopted machine learning techniques address this issue across private and public business, using a systematic literature review to study the impact of machine learning on financial crisis prediction. Specifically, from the selected works the author identifies the main roles of these methods in predicting bankruptcy and credit, such as data processing, data privacy, and confidentiality. The author also puts forward the main methods for realizing the financial growth and resilience of a company. Actively monitoring and assessing the economic health of financial institutions is the most basic work of regulatory authorities. Petropoulos et al. [29] used a series of modeling techniques to predict the bankruptcies of American financial institutions. Comparing against widely used bank failure models and other advanced machine learning models, the authors conclude that the random forest method offers superior out-of-sample and out-of-time prediction performance, with neural networks performing almost as well out-of-time. The authors also show experimentally that, in their evaluation framework, indicators related to income and capital make a high marginal contribution to bank failure prediction, and they evaluate the generalization of the machine learning model through case studies on a sample of major European banks.

2.4. Computer-Vision-Based Economic Situation Prediction

In most countries, reliable data on individuals' socio-economic status, such as personal health indices, household consumption expenditure, total household wealth, and assets, remain scarce. The traditional methods of collecting such data, field surveys and questionnaires, are costly and labor-intensive, making them difficult to apply widely at the national level. Remote sensing data, such as high-resolution satellite images, are widely available in many countries. To compensate for the lack of fine-grained socio-economic data, computer vision on remote sensing data has been successfully applied to raw satellite images sampled from resource-poor countries [30]. Methods for automatically counting fruit play an important role in agricultural crop management. Syal et al. [31] reviewed previous studies on counting the fruit on trees and estimating their yield and introduced various computer vision and optimization techniques for automatic fruit counting. The authors also summarize the main advantages and disadvantages of existing systems in agricultural automation. The results show that k-means clustering or a color-based nearest-neighbor classifier can provide the color space transformation of RGB images needed for better classification. After the color image is segmented, a circle-fitting algorithm is applied together with morphological operations so that the fruit on a tree can be separated and counted. The authors use a fruit recognition algorithm to extract various color and shape features and count fruit with a fuzzy-logic-based classifier, claiming that their method implements an efficient fruit counting and yield mapping algorithm. Breeding crop varieties with high economic benefit is of great significance to social stability and development. The purchase price usually reflects the economic benefit of a crop, but the traditional method of estimating economic benefit with purchase price formulas based on manually measured features is very time- and labor-consuming. Structure-from-motion combined with multiview stereo can extract plant phenotypic features and estimate the economic benefit of crops efficiently and promptly. Xiao et al. [32] developed a framework that obtains phenotypic traits via non-linear formulas and partial least squares regression models to estimate the economic benefits of various crop genotypes. Specifically, the authors designed a low-cost portable device for acquiring multiview images of crops to facilitate subsequent three-dimensional reconstruction; multiple traits are then estimated from the reconstructed 3D model, and a prediction model is constructed on a real-world data set. The experimental results show that the proposed model achieves the expected estimates of economic benefit. The appearance of a house, its neighbors, and the owner's incentives all affect its price. Using computer vision, Glaeser et al. [33] found that a standard-deviation improvement in residential appearance is associated with an increase in residential value. Relative to location and basic housing variables, the additional predictive power of images is small, but exterior images outperform the variables collected by home assessors. House values also rise when the predicted visual quality of neighboring houses increases. In addition, the authors found no relationship between a rental property's appreciation or depreciation and its appearance, and owners do not modify or upgrade the appearance of their residences before resale. Most textiles produced in the world are based on polyester and cotton fibers, and at the end of the textile life cycle most are currently buried or incinerated. Mäkelä et al. [34] aim to recycle these textiles effectively. The authors apply hyperspectral near-infrared imaging to estimate the polyester fiber content of textiles, with the goal of developing a machine vision model for textile characterization and recycling. Specifically, they first visualize differences in textile samples with a principal component model and then use an image regression algorithm to predict the polyester content of each image pixel. Their experiments show an average prediction error of 2.2%–4.5% over the 0–1 range of polyester content, and the method can visualize the spatial variation of polyester content in textiles. Training robot arms to complete manual tasks in the real world has received increasing attention in academia and industry. Zuo et al. [35] investigated the role of computer vision algorithms in this field; all of their decisions are based on visual recognition, such as real-time 3D pose estimation. The authors also propose an annotation scheme for large amounts of training data, create large quantities of synthetic data from 3D models, and train a machine vision model in this virtual domain, applying it to real images after domain adaptation. Specifically, they exploit the geometric constraints between keypoints, design a semisupervised machine learning method, and optimize the model with an iterative algorithm. They also construct a computer-vision-based task control system that can train a reinforcement learning agent for the real world in a virtual environment.

3. Preliminaries

We introduce some preliminaries that are frequently used in the remainder of this paper.

3.1. Machine Learning

Machine learning is the idea that a generic algorithm can draw interesting conclusions from input data without any code written specifically for the problem: fed input data, the algorithm builds its own logic from the data. For example, a classification algorithm can divide data into different groups. The same classification algorithm can be used to recognize handwritten digits (Figure 1 presents an example of a machine learning model predicting digit data) or to distinguish spam from nonspam without modifying a single line of code; given different training data, the same algorithm learns different classification logic. A machine learning algorithm is a black box that can be reused across many different classification problems. “Machine learning” is an umbrella term covering a large number of similar generic algorithms. There are two kinds of machine learning algorithms, supervised learning and unsupervised learning; distinguishing between the two is simple but very important.
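As a concrete illustration, the digit-classification example above can be reproduced in a few lines (a minimal sketch; the choice of scikit-learn and logistic regression here is our own assumption, not part of the paper's method):

```python
# Minimal sketch of the handwritten-digit classification example.
# Library (scikit-learn) and classifier choice are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digit images flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# The generic algorithm builds its own logic from the training data;
# different training data would yield different classification logic.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Fed e-mail features and spam/nonspam labels instead, the same algorithm would learn spam classification without any change to the code.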

3.2. Deep Learning

Deep learning is a machine learning method. As an artificial neural network approach, it independently constructs (trains) basic rules from sample data during the learning process. In the field of machine vision especially, neural networks are usually trained by supervised learning, that is, from example data and predefined results for that data. How does deep learning work? It can be divided into four parts (Figure 2 presents the structure of deep learning).

First, artificial neural networks: deep learning uses some form of artificial neural network (ANN), which must first be trained with sample data. The trained ANN can then be used to perform related tasks; this use of a trained ANN is called “inference.” During inference, the ANN evaluates the data provided according to the rules it has learned, for example, judging whether an object in an input image has a defect.

Second, neurons, layers, and connections: an ANN is composed of multiple interconnected layers of “neurons.” In the simplest case, these are just an input layer and an output layer. The neurons and connections can be viewed as matrices: a connection matrix links each value of the input matrix to a value of the result matrix, and each entry of the connection matrix holds the weight of the corresponding connection. The values of the result matrix are produced by weighting the input values through the connection matrix.
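To make the matrix picture concrete, here is a toy numerical sketch (the layer sizes are arbitrary assumptions) of how a result layer is obtained from an input layer through a connection matrix:

```python
import numpy as np

# Toy sketch: the weights linking an input layer to a result layer form a
# matrix W, and the result values weight the inputs through W.
# (Bias terms and activation functions are omitted for brevity.)
rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 3

x = rng.normal(size=n_inputs)               # input "neuron" values
W = rng.normal(size=(n_outputs, n_inputs))  # one weight per connection

result = W @ x  # each result value is a weighted combination of all inputs
print(result)   # 3 output values
```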

Third, deep artificial neural networks: “deep learning” refers to the training of deep ANNs. In addition to the input and output layers, a deep ANN can contain many (even hundreds of) hidden layers between these visible layers. The result matrix of each hidden layer is the input matrix of the next layer, so the final results are provided only by the output matrix of the last layer.

Fourth, training: when training an ANN, the weights are initialized randomly, and sample data are then fed in step by step. The connection weights are adjusted according to the input data, the expected results, and the training rules. The final performance of the ANN (i.e., the accuracy of its evaluations) depends largely on the example data used in training. If training uses a large amount of sample data with high variability, more accurate inference results can usually be obtained. If a large number of very similar or repeated samples are used, the ANN will be unable to generalize when it encounters data that differ from the samples; this situation is called overfitting.
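The training procedure described above can be sketched as follows (a minimal illustration; the network shape, the synthetic data, and the use of PyTorch with gradient descent as the training rule are our own assumptions):

```python
import torch
import torch.nn as nn

# Weights start random and are adjusted step by step so that predictions
# on the sample data move toward the predefined expected results.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

X = torch.randn(100, 8)  # sample data (synthetic, for illustration)
y = torch.randn(100, 1)  # predefined expected results

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # compare prediction with expected result
    loss.backward()              # compute how each weight should change
    optimizer.step()             # adjust the weights (the training rule)
```

After training, calling `model(new_data)` performs the inference step described earlier.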

4. Method

The first problem to solve in economic forecasting is economic modeling. Our proposed method therefore consists of two parts: an economic model based on a graph neural network and an economic forecast based on LSTM.

4.1. Economic Model

Considering the many factors affecting the economy, we propose a graph neural network as the basic structure for economic modeling. The graph structure, shown in Figure 3, consists of nodes and the edges connecting them. As shown in the figure, the connections between nodes are directed, and different edges carry different weights. In economics, these edge weights are often affected by many factors at the same time. In traditional economic modeling, the weights are generally set manually and fixed to values chosen from experience. However, such weights may only hold for a limited time and may change in practice, so the traditional model is not accurate.

To make the graph structure computation more efficient, we propose an embedding feature matrix, which yields the edge weight between each pair of nodes. The matrix is square, and its row and column indices correspond to nodes, so each entry represents the weight between the corresponding pair. Because the data forms of the various economy-related factors differ greatly (some are scalars and some are vectors), we first unify them into feature vectors of the same dimension using a feature embedding, which facilitates computing the correlation between vectors, as shown in Figure 4. The feature embedding consists of a fully connected layer and a ReLU function. After each factor has been turned into a feature vector, the entries of the weight matrix are obtained from the similarity between vectors, computed here as the vector inner product.
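The following sketch illustrates this embedding-and-similarity computation (the layer sizes, factor dimensions, and use of PyTorch are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

embed_dim = 32  # common feature dimension (an assumed value)

def make_embedding(in_dim: int) -> nn.Module:
    # Feature embedding: a fully connected layer followed by ReLU,
    # mapping a factor of any raw dimension to the common dimension.
    return nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())

# Three hypothetical economic factors with different raw forms
# (a scalar and two vectors of different lengths).
factors = [torch.randn(1), torch.randn(5), torch.randn(12)]
embeddings = [make_embedding(f.numel()) for f in factors]

# Unify all factors into feature vectors of the same dimension.
features = torch.stack([emb(f) for f, emb in zip(factors, embeddings)])

# Square weight matrix: entry (i, j) is the inner product (similarity)
# between the feature vectors of factors i and j.
weight_matrix = features @ features.T
print(weight_matrix.shape)  # (3, 3)
```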

4.2. Economic Forecasting

To make the economic forecast more accurate, we adopt an artificial intelligence method to model and analyze the forecasting problem. The main reference points for the model's prediction are historical information, historical results, and current information. Because the model needs historical information for prediction, we adopt the LSTM structure: a multilayer perceptron extracts features from the input data, the LSTM fuses features with past results, and the current result is predicted from the combination. The LSTM structure is shown in Figure 5. The notation $y_t$ denotes the output or real value at time $t$. The notation $x_t$ denotes the input information at time $t$, which includes multiple factors such as those mentioned in Section 4.1. The notation $\phi$ denotes a combination of operations, including a tanh activation function and a multilayer perceptron. As shown in Figure 5, the prediction at the current time can be expressed as

$$\hat{y}_t = \phi\left(W\, y_{t-1} + G_t\, x_t\right),$$

where $W$ is the translation weight between the previous-time result and the current-time result, $G_t$ is the graph weight between the different economic factors at time $t$, and $x_t$ is the input corresponding to time $t$.

Through the LSTM structure, we comprehensively analyze and integrate historical results, historical information, and current information, which makes the network's predictions more comprehensive and better grounded.
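A compact sketch of such a forecaster is given below (all sizes, the exact fusion of historical results, and the use of PyTorch are our own assumptions; the configuration of Figure 5 is not reproduced exactly):

```python
import torch
import torch.nn as nn

class EconomicForecaster(nn.Module):
    """MLP feature extraction + LSTM fusion of history, as described above."""
    def __init__(self, n_factors: int = 8, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_factors, hidden), nn.Tanh())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_factors) -- historical and current information
        feats = self.mlp(x)           # per-step feature extraction
        out, _ = self.lstm(feats)     # carries historical state forward
        return self.head(out[:, -1])  # prediction y_t for the current time

model = EconomicForecaster()
pred = model(torch.randn(4, 30, 8))  # 4 series, 30 time steps, 8 factors
print(pred.shape)                    # (4, 1)
```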

5. Results

We use stock forecasting as an example to validate our economic modeling method and forecasting model. As shown in Figure 6, the blue curve is the real stock gains, while the red curve is the prediction. We input the current news score, historical news scores, and historical stock gains, and the LSTM predicts the gain at the current time. The results show that the RMSE decreases from 21.5% to 10.3% once the relevant information is fed into the LSTM, indicating that our proposed LSTM method can effectively fuse multimodal data and bring it to bear on the current prediction.
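For reference, the error metric reported above can be computed as follows (a sketch; the gain values below are hypothetical illustrations, not the experimental data):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean squared error between real and predicted values.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical example: real vs. predicted stock gains over four steps.
real_gains = np.array([0.012, -0.004, 0.021, 0.007])
predicted = np.array([0.010, -0.001, 0.018, 0.009])
print(rmse(real_gains, predicted))
```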

6. Conclusions

In this paper, we first analyze the characteristics of the current digital economy and the status of traditional economic modeling and forecasting. Then, drawing on the current advantages of artificial intelligence and big data, we put forward economic modeling and forecasting methods based on artificial intelligence and big data analysis. The proposed approach fully considers the objective factors of economic development and the influence of multiple elements, proposing an economic modeling method based on a graph structure and an economic forecasting model based on LSTM, respectively. The stock forecasting experiment shows that our economic modeling and forecasting methods are effective. In the future, we will extend this work in three directions. First, we will exploit more neural network models for effective comparison. Then, we will build a real platform (such as a software system) for forecasting economic conditions. Finally, we will adopt more data analysis models to improve accuracy.

Data Availability

All data used to support the findings of this study are included within the paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This paper was supported by Social Science Foundation of Jilin Province (Grant no. 2022J51).