Abstract

Driven by capital and by Internet and information technology (IT), the operating and capital scale of modern industrial and commercial enterprises and other organizations has grown exponentially. The manual-based financial work model can no longer keep pace with the speed of change in the modern business environment or with the business rhythm of enterprises. Enterprises and organizations of all kinds, especially large enterprises, urgently need to improve the operational efficiency of their financial systems. By enhancing the integrity, timeliness, and synergy of financial information, they can improve the comprehensiveness of financial analysis and the ability to analyze complex problems, cope with such rapid change, strengthen financial management capabilities, provide more valuable decision-making guidance for business operations, and reduce business risks. In recent years, the vigorous development of artificial intelligence technology has provided a feasible way to meet these urgent needs. Combining data mining, deep learning, image recognition, natural language processing, knowledge graphs, human-computer interaction, intelligent decision-making, and other artificial intelligence technologies with IT to transform financial processes can significantly shorten the processing time of repetitive basic financial work, reduce dependence on manual accounting, and improve the efficiency of the financial department. Through autonomous analysis and decision-making by artificial intelligence, financial management becomes intelligent, and more accurate and effective financial decision support is provided to enterprises. This paper studies the company’s intelligent financial reengineering process so as to provide a reference for other enterprises undertaking similar financial system upgrades. The analysis shows that, at the chosen significance level, there is a significant difference between the means of the two populations, and that the closer the statistic is to -1 or 1, the more evident the linear relationship between the variables. This paper offers decision-making suggestions and risk-control early warnings to the group’s decision-making body, evaluates the financial impact of the group’s decisions, and opens the road to financial intelligence.

1. Introduction

In the context of big data, it has become easier and faster for people to obtain data, but they still face the problem of too much data and too little useful information: the various types of data must be filtered and analyzed according to user needs before information of decision-making value can be obtained. Against this background, data mining technology came into being. Big data makes it possible to capture and store data on a massive scale, while data mining technology can filter and extract useful information from that data, build intelligent analysis systems, and draw valuable decision-making conclusions. In particular, many recent innovations in algorithm optimization and data modeling have promoted the rapid adoption of data mining technology in disciplines and fields including financial analysis. The purpose of financial analysis is to analyze and judge, scientifically and logically, the operation and financial status of a business on the basis of relevant business and accounting data, ensuring that the business is properly evaluated and that high-level decision-making is supported [1, 2]. Financial analysis began at the start of the 20th century and, after nearly a hundred years, has evolved from initial loan credit evaluation into support for the main financial activities and decisions of enterprises. As technology and the economy continue to change, businesses collect and store more and more data. The shortcomings of traditional financial analysis methods are gradually becoming apparent, which makes the decision-making needs of modern enterprises more urgent.

Valuable data on the internal business factors and the external business environment that influence the financial and operational health of a business become business intelligence. Business intelligence enables decision-makers to obtain information more promptly and accurately, make financial and strategic decisions suited to the current situation, and improve the market competitiveness of the business. How best to achieve this is still a topic worthy of further study. Against the background of the big data era, this paper analyzes the difficulties faced by traditional financial analysis, builds a financial analysis system based on data mining principles, and explains in detail the existing problems, the application process, its evaluation, remaining problems, and techniques for improving the system.

The financial analysis approach developed in this article on a data mining platform not only matches the actual business needs of enterprises but also accords with the development of accounting knowledge, and so it has particular theoretical and practical significance. Traditional financial analysis suffers from errors and delays when data are consolidated. The development of big data and the application of data mining technology enable financial analysis to be carried out in real time, across multiple dimensions, and quickly. This paper proposes using data mining technology to establish a financial analysis system and promote real-time integration. The system filters and analyzes internal and external business data, supports a theoretical system of management accounting, and tracks the overall progress of management accounting. In the era of big data, enterprises are awash in data yet find decisions difficult to make; the system provides managers with new decision-making ideas.

2. Related Work

Many studies by experts worldwide address data mining, financial big data management, and artificial intelligence. Xu looks at privacy issues related to data mining from a broad perspective and studies various methods that help protect sensitive information [3]. Chaurasia constructs a “universe” of more than 18,000 fundamental signals from financial statements and uses a bootstrap approach to assess the impact of data mining on fundamental-based anomalies [4]. Yan and Zheng proposed a new method for constructing flood sensitivity maps by combining fuzzy weight of evidence (fuzzy-WofE) with data mining methods [5]. Hruza, drawing on secondary and empirical research, identified how the financial management of municipalities developed before, during, and after the crisis [6]. Youssef reviews the latest scientific achievements in applying artificial intelligence (AI) technology to photovoltaic (PV) systems, investigating the role of AI algorithms in the modeling, scaling, control, fault diagnosis, and output estimation of PV systems [7]. Lee regards stroke medicine as one of the areas where artificial intelligence can improve diagnostic accuracy and the quality of patient care, arguing that adequate analysis of stroke imaging is crucial for stroke management [8]. These studies provide useful points of reference, but because their data were insufficient and their trial scope too small, their approaches were not adopted here.

3. Application Mechanism of Artificial Intelligence and Data Mining

This paper compares the early warning effect of a paired indicator system matched on “asset size” with one matched on “number of employees” and concludes that, in research on financial crisis early warning for small and medium-sized enterprises (SMEs), “asset size” is more representative of enterprise size than “number of employees.” It also compares experiments that consider only financial indicators with experiments that add “corporate governance” variables [9, 10]. The results show that incorporating “corporate governance” indicators improves the prediction accuracy of SME financial crisis early warning. The paper further compares the early warning effect of the indicator system after RS reduction with that before reduction and concludes that the RS method provides effective attribute reduction. Finally, the early warning effect of the artificial intelligence model is compared with that of a logistic regression model, and it is concluded that the artificial-intelligence-based approach to SME financial crisis early warning has good predictive performance, as shown in Figure 1 [11].
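
As an illustration of the kind of model comparison described above, the sketch below fits a logistic regression model and a small neural network to synthetic early-warning data and compares their accuracy. The data, features, and model settings are assumptions for demonstration only, not the indicator system or models used in the paper.

```python
# Sketch: comparing a logistic regression early-warning model with a small
# neural network on synthetic "financial indicator" data (not the paper's data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical sample: 500 SMEs, 12 financial/governance indicators,
# binary label (1 = financial crisis, 0 = healthy).
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", accuracy_score(y_test, logit.predict(X_test)))
print("neural network accuracy:    ", accuracy_score(y_test, mlp.predict(X_test)))
```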

As an important part of information discovery, data mining is the process of extracting useful information from databases using computer algorithms; it does not include the collection and preprocessing of the raw data [10, 12]. In practice, data mining can be embedded within other systems and can draw on built-in information or other objects, as shown in Figure 2 [13, 14].

In a simple but complete data mining architecture, the data warehouse and other information systems work together. The function of the data warehouse is to filter, format, cleanse, and analyze data from other information systems or data collection tools; the processed data are then collected into a storage unit so that the data mining system can analyze them for value. It should be noted that the data warehouse and the database are two different concepts: the former is derived from the latter and is a subset of the data that has first been analyzed and processed [15, 16].

As shown in Figure 3, before the data are processed, the problem to be addressed must first be defined: the problem is framed by the data analyst, and data mining experts then use the data and models to choose algorithms for searching the database. Finally, by combining the samples with the information provided, the results can be turned into useful output, such as reports. The figure shows a simple but complete way of processing information. In a data mining project these steps include selecting and preparing the data, building and interpreting models, and integrating the models, and the process is essentially a cycle. The first step is data preparation, which includes reviewing the data, understanding its distribution, removing abnormal data or noise, and correcting incomplete records; the quality of this preparation directly affects the mining results. The second step is modeling, which includes finding the right data mining algorithms and deployment tools, choosing metrics, and creating the relevant models. The third step is evaluation: the results are reviewed, compared, and corrected until a model matches user expectations, at which point the selected model is assessed and, if the results are satisfactory, applied. The final step is to integrate and deploy the models, so that the results of the mining process are ultimately used in decision-making.
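
The cycle described above can be summarized as a small pipeline skeleton. The sketch below is purely schematic, with placeholder logic standing in for real preparation, modeling, evaluation, and deployment steps.

```python
# Schematic sketch of the four-step mining cycle (prepare, model, evaluate,
# deploy); the function bodies are placeholders, not the paper's implementation.
def prepare_data(raw):
    """Review the data, remove abnormal records or noise, fix incomplete ones."""
    return [r for r in raw if r is not None]

def build_model(data):
    """Choose an algorithm and metrics and fit a model (placeholder: the mean)."""
    return sum(data) / len(data)

def evaluate(model, data, tolerance=0.5):
    """Compare results with expectations; True means the model is acceptable."""
    return all(abs(x - model) < tolerance for x in data)

def deploy(model):
    """Integrate the model so its output feeds the decision-making process."""
    print("model deployed:", model)

raw = [1.0, 1.2, None, 0.9, 1.1]
data = prepare_data(raw)
model = build_model(data)
if evaluate(model, data):
    deploy(model)
```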

As far as SMEs are concerned, their primary feature is their small scale: the number of employees, total assets, or operating scale is small compared with large enterprises. Such an enterprise can usually be funded by one person or a few people, so the investment amount is small, the construction period is short, returns come quickly, decision-making power is concentrated, and the number of employees and the daily turnover of the enterprise are usually small, as shown in Table 1 [17, 18].

The group’s financial decision support system is committed to realizing a decision support model of “standardized reporting, intelligent decision-making, and information networking,” providing financial information support for the group. With the help of the ETL development interface, the system extracts the original data required for enterprise finance from sources such as the NC-ERP system, the network reporting system, and Excel tables. After data cleaning and transformation, the data are loaded into the big data platform and then classified and summarized along different dimensions to form a big data warehouse. With the help of financial analysis and decision-making models, these classified and stored data are further processed to meet the financial decision-making needs of management. The system can automatically generate visual business indicators, management cockpits, analysis reports, and so on, as shown in Figure 4 [19].
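
A minimal ETL sketch of this flow is given below, assuming hypothetical file names and column names in place of the actual NC-ERP and reporting-system interfaces; it extracts two sources with pandas, cleans and aligns them, and writes a dimension summary to a warehouse layer.

```python
# Minimal ETL sketch under assumed file and column names; the real system pulls
# data from NC-ERP, a network reporting system, and Excel via an ETL interface.
import pandas as pd

# Extract: hypothetical exports standing in for the actual source systems.
erp = pd.read_csv("nc_erp_vouchers.csv")            # assumed NC-ERP export
reports = pd.read_excel("subsidiary_reports.xlsx")  # assumed Excel workbook

# Transform: clean and align both sources on a common set of columns (assumed).
frames = []
for df in (erp, reports):
    df = df.dropna(subset=["amount"])
    df["amount"] = df["amount"].astype(float)
    df["period"] = pd.to_datetime(df["period"]).dt.to_period("M").astype(str)
    frames.append(df[["company", "account", "period", "amount"]])
combined = pd.concat(frames, ignore_index=True)

# Load: summarize by dimension and write to the warehouse layer (Parquet file).
summary = (combined.groupby(["company", "account", "period"], as_index=False)["amount"]
           .sum())
summary.to_parquet("finance_summary.parquet", index=False)
```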

Following the basic index screening process, the article first conducts a preliminary screening of sample indicators under the matching standard of “same industry, similar asset size.” Owing to space limitations, only the testing process for year T-4 is presented here. First, a normality test is performed on the primary financial indicators: SPSS is used to run the Kolmogorov-Smirnov (K-S) test on each primary indicator, and the results are shown in Table 2 [20, 21].
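
A rough equivalent of this screening step outside SPSS is sketched below with SciPy: a one-sample K-S test of each primary indicator against a normal distribution whose parameters are estimated from the sample. The indicator names and values are made up for illustration.

```python
# Sketch: one-sample K-S normality check of each indicator (synthetic data).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
indicators = pd.DataFrame({                      # stand-in sample data
    "asset_liability_ratio": rng.normal(0.55, 0.12, 80),
    "return_on_assets": rng.normal(0.06, 0.03, 80),
})

for name, x in indicators.items():
    # Test against a normal distribution with mean/std estimated from the sample.
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    verdict = "approximately normal" if p > 0.05 else "non-normal"
    print(f"{name}: KS={stat:.3f}, p={p:.3f} -> {verdict}")
```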

Table 2 shows that eight indicators, namely, the asset-liability ratio, return on assets, net interest rate on total assets, return on equity, operating net interest rate, operating profit rate before interest and tax, sustainable growth rate, and net cash content of operating income, generally conform to a normal distribution. A two-independent-samples t-test is therefore used to test whether the mean differences of these eight indicators between the two populations are significant. The test results are arranged in Table 3 [22, 23].
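
The mean-difference test itself can be sketched as follows, using a two-independent-samples t-test (the Welch variant here) on one indicator for the crisis group versus the healthy group; the sample values are synthetic.

```python
# Sketch: two-independent-samples t-test on one indicator (synthetic values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
roa_crisis = rng.normal(-0.02, 0.04, 40)   # return on assets, crisis firms
roa_healthy = rng.normal(0.06, 0.04, 40)   # return on assets, healthy firms

t, p = stats.ttest_ind(roa_crisis, roa_healthy, equal_var=False)  # Welch's t-test
print(f"t = {t:.3f}, p = {p:.4f}")
if p < 0.05:
    print("the mean difference between the two populations is significant")
```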

The experiment uses two laptops, one to simulate IoT sensing devices and the other to simulate a fog device. For convenience, the laptop acting as the fog device is called no. 1 and the laptop acting as the sensing device is called no. 2. Both laptops run the Ubuntu Kylin 19.04 operating system. No. 1 turns on a hotspot to provide a network connection for no. 2. A Docker container is deployed on no. 1 with the hidden-layer neuron code inside it; for testing purposes, a Docker container is also deployed on no. 2 with the input-layer neuron code inside it. The experiment needs to test bandwidth and delay. iperf, a network measurement tool that can test TCP and UDP bandwidth quality, is used for the measurement: computer no. 2 sends data to computer no. 1 every 15 seconds, 20 times in total. The measured transmission rate is shown in Figure 5 [24, 25].
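
A possible way to script the bandwidth sampling with iperf3 is sketched below; the server address is a placeholder, and laptop no. 1 is assumed to be running `iperf3 -s`.

```python
# Sketch: collect 20 bandwidth samples from laptop no. 2, one every 15 seconds.
import subprocess
import time

SERVER = "10.42.0.1"   # placeholder address of laptop no. 1 (runs `iperf3 -s`)

for i in range(20):                          # 20 measurements, one every 15 s
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "5"],  # short TCP test per sample
        capture_output=True, text=True,
    )
    print(f"sample {i + 1}:\n{result.stdout}")
    time.sleep(15)
```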

From the data shown in Figure 5, the average bandwidth of the communication link between the sensing device and the fog device is calculated to be 7.625 Mbps. In a comparison experiment, the sensor collects data every 10 seconds and sends it to the cloud, 20 times in total, and the average bandwidth of the communication link between the sensor and the cloud is measured to be 3.125 Mbps [26]. It can therefore be concluded that, compared with sending sensor data to the cloud for neural network model construction and inference, the system proposed in this paper expands the available bandwidth. To measure delay, 6 Docker containers are deployed on no. 2 to simulate 6 sensing devices. Computer no. 2 uses 1, 2, 3, 4, 5, and 6 Docker containers to send data to no. 1 at the same time [27], and the time from when data are sent to when feedback is received is recorded. In the comparison experiments, the sensor data are instead sent to the cloud: 1, 2, 3, 4, 5, and 6 sensors collect data, send it to the cloud simultaneously, and the delay is recorded. Based on these results, the delay comparison between sending data to the fog device and sending it to the cloud is shown in Figure 6 [28].
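
The delay measurement can be approximated by timing a send-and-feedback round trip over a socket, as sketched below; the fog endpoint address and the assumption that the container replies on the same connection are placeholders, not the paper's exact setup.

```python
# Sketch: time the round trip from sending simulated sensor data to receiving
# feedback from the fog device (host/port are placeholders).
import socket
import time

FOG_HOST, FOG_PORT = "10.42.0.1", 9000     # assumed fog device endpoint
payload = b"x" * 1024                      # 1 KB of simulated sensor data

delays = []
for _ in range(10):
    start = time.perf_counter()
    with socket.create_connection((FOG_HOST, FOG_PORT), timeout=5) as s:
        s.sendall(payload)
        s.recv(4096)                       # wait for the feedback message
    delays.append((time.perf_counter() - start) * 1000)

print(f"average delay: {sum(delays) / len(delays):.1f} ms over {len(delays)} runs")
```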

The standard BP algorithm is derived from the following quantities [29, 30]: the network objective function, the recurrence relation between neuron outputs, the definition of the error term, the output of each neuron in the hidden layer, and the corresponding threshold change of each neuron in the hidden layer.
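
Written out in one common textbook notation (with d_k the desired output, y_k the actual output, o_j and net_j the output and net input of neuron j, f the sigmoid activation, and eta the learning rate), these quantities take roughly the following form; the symbols are assumptions rather than the paper's own notation.

```latex
% Standard BP formulas in assumed textbook notation.
\begin{align}
  E &= \tfrac{1}{2} \sum_{k} (d_k - y_k)^2
      && \text{network objective function} \\
  o_j &= f(\mathrm{net}_j), \qquad
  \mathrm{net}_j = \sum_{i} w_{ij}\, o_i + \theta_j
      && \text{recurrence between neuron outputs} \\
  \delta_j &= -\frac{\partial E}{\partial \mathrm{net}_j}
      && \text{error term of neuron } j \\
  o_j &= f\Big(\sum_{i} w_{ij}\, x_i + \theta_j\Big), \qquad
  f(x) = \frac{1}{1 + e^{-x}}
      && \text{output of hidden neuron } j \\
  \delta_j &= f'(\mathrm{net}_j) \sum_{k} \delta_k\, w_{jk}, \qquad
  \Delta \theta_j = \eta\, \delta_j
      && \text{threshold change of hidden neuron } j
\end{align}
```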

4. Empirical Research on Enterprise Financial Risk Analysis

To improve the efficiency of the empirical research, this paper focuses mainly on a set of Chinese listed companies. The quarterly and annual report data required for the research are all sourced from a financial data website.

The accuracy of the research results depends to a large extent on the selected sample companies. When analyzing enterprise financial risk, this paper mainly uses data from 2016 to 2021 and takes 20 listed companies as the research objects. What these companies have in common is that they all experienced loss crises caused by financial risks, and all are quite typical cases. The company names and stock codes are given in Table 4.

Removing abnormal values is part of cleaning the sample data. If these abnormal data are not excluded, the accuracy of the financial risk analysis will be disturbed and follow-up research will be hampered. It is therefore necessary to analyze all sample values in depth, screen out abnormal data promptly, and then begin the discretization and data reconstruction operation. Figure 7 lists the relevant values of the financial indicators of these 20 listed companies before the reconstruction operation.
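
One common way to implement this screening is a 3-sigma filter over each indicator, sketched below; the paper does not specify its screening rule, so the rule, file name, and column layout here are assumptions.

```python
# Sketch: drop observations whose indicator values fall outside 3 standard
# deviations (an illustrative rule, not necessarily the paper's).
import pandas as pd

panel = pd.read_csv("listed_company_indicators.csv")   # assumed input file
indicator_cols = [c for c in panel.columns if c not in ("company", "year")]

mask = pd.Series(True, index=panel.index)
for col in indicator_cols:
    mu, sigma = panel[col].mean(), panel[col].std()
    mask &= (panel[col] - mu).abs() <= 3 * sigma        # keep values within 3 sigma

cleaned = panel[mask].reset_index(drop=True)
print(f"kept {len(cleaned)} of {len(panel)} observations after outlier screening")
```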

The main reason the data reconstruction operation is carried out is that the financial indicator data actually held by the enterprise are continuous, which does not meet the requirements of the interactive mining method based on association rules. It is therefore necessary to use the definition of financial risk levels to transform the continuous values into discrete sample data. The resulting sample values are shown in Figure 8.
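
The discretization step can be sketched with pandas.cut, mapping a continuous indicator to discrete risk levels; the bin edges and level labels below are illustrative assumptions, not the paper's actual thresholds.

```python
# Sketch: discretize a continuous indicator into risk levels (assumed cut points).
import pandas as pd

cleaned = pd.DataFrame({                       # stand-in for the cleaned sample
    "company": ["A", "B", "C", "D"],
    "asset_liability_ratio": [0.35, 0.58, 0.74, 0.91],
})

bins = [0.0, 0.4, 0.6, 0.8, 1.0]               # assumed cut points
labels = ["low", "medium", "high", "severe"]   # assumed risk levels
cleaned["alr_risk_level"] = pd.cut(cleaned["asset_liability_ratio"],
                                   bins=bins, labels=labels)
print(cleaned)
```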

The focus of the research is that, guided by the interactive mining method based on association rules and combined with the actual situation, the confidence threshold and support threshold are set purposefully. The paper accurately reports parameters closely related to the financial risk indicators, such as the number of rules and the frequent patterns. It also explores the interaction mechanism among the financial indicator systems, points to the essential causes of financial risk crises in enterprises, and provides ideas for preventing such crises. Interactive mining can involve repeated computation when the support threshold is reduced. Therefore, the mining algorithm used here builds on mining results that have already been obtained and generates the frequent itemsets that correspond to the new support threshold. Support counts of frequent itemsets are stored in a hash structure, which speeds up their retrieval and significantly improves overall mining efficiency.
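
The underlying idea of hash-structured support counting and rule screening can be illustrated with a short pure-Python sketch; this is not the paper's HIUA algorithm, only a simplified demonstration of counting itemset supports in a hash table (a dict) and filtering rules by support and confidence thresholds on toy data.

```python
# Sketch: count itemset supports in a dict (hash structure) and screen rules.
from itertools import combinations

# Each transaction is the set of discrete risk levels observed for one firm-year.
transactions = [
    {"alr=high", "roa=low", "cashflow=low"},
    {"alr=high", "roa=low"},
    {"alr=medium", "roa=low", "cashflow=low"},
    {"alr=high", "cashflow=low"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.6

support_counts = {}                       # hash structure: itemset tuple -> count
for t in transactions:
    for size in (1, 2):
        for itemset in combinations(sorted(t), size):
            support_counts[itemset] = support_counts.get(itemset, 0) + 1

n = len(transactions)
frequent = {s: c / n for s, c in support_counts.items() if c / n >= MIN_SUPPORT}

# Derive simple one-to-one rules "A -> B" from the frequent 2-itemsets.
for itemset, sup in frequent.items():
    if len(itemset) != 2:
        continue
    a, b = itemset
    conf = sup / frequent[(a,)]           # (a,) is frequent whenever (a, b) is
    if conf >= MIN_CONFIDENCE:
        print(f"{a} -> {b}  (support={sup:.2f}, confidence={conf:.2f})")
```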

Two situations are not considered for the update of frequent pattern sets: a rising support threshold and a fluctuating confidence threshold, since in these cases the speed advantage of IUA and HIUA is already intuitively obvious. Therefore, this paper considers only the case in which the support threshold decreases and compares the time spent by each algorithm in computing the frequent pattern sets. The running times are shown in Figure 9.
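
The reuse idea behind incremental updating when the support threshold is lowered can be illustrated as follows: if the itemset counts from a previous pass are cached, the frequent pattern set under the lower threshold can be re-derived by re-filtering those counts instead of rescanning everything. The sketch below shows only that reuse idea on toy counts, not the IUA or HIUA algorithms themselves.

```python
# Sketch: re-derive frequent itemsets from cached counts as the threshold drops.
def frequent_from_cache(support_counts, n_transactions, min_support):
    """Re-filter cached itemset counts under a (lower) support threshold."""
    return {itemset: count / n_transactions
            for itemset, count in support_counts.items()
            if count / n_transactions >= min_support}

# Cached counts from an earlier mining pass over 4 transactions (illustrative).
cached_counts = {("alr=high",): 3, ("roa=low",): 3, ("cashflow=low",): 3,
                 ("alr=medium",): 1, ("alr=high", "roa=low"): 2}

for threshold in (0.75, 0.5, 0.25):           # progressively lower thresholds
    freq = frequent_from_cache(cached_counts, 4, threshold)
    print(f"min_support={threshold}: {len(freq)} frequent itemsets")
```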

Based on the principles and overall structure of applying data mining to financial analysis, this paper proposes a concrete application process. The first step is to determine the problems to be analyzed according to the needs of operation and management and, from those problems, to determine the internal and external information required. The data mining financial analysis platform extracts the required data, and after preprocessing such as cleaning and conversion the data enter the data warehouse in preparation for the second step, analysis and mining. The data in the warehouse must be processed by analytical models before they can be converted into the knowledge required by information users; this is the core link of data mining, and the data mining process advocated by CRISP-DM can go through a complete cycle at this stage. The third step is data output, which can be divided into active and passive output. Active output means that the platform proactively pushes results, prompts, and early warning information to users according to preset user needs; passive output means that the platform responds to the retrieval or request instructions of managers and information users.

As can be seen from Figure 10, the system has three crucial components: the data warehouse, the model library, and the data mining expert library. The data in the data warehouse are preprocessed by cleaning, screening, repair, and so on to resolve data quality problems. Most of the business-level data in the warehouse are transformed into a global data view, which constitutes the database underlying the entire data mining financial analysis system. The model library is the basis of decision support and performs quantitative analysis in response to requests. Finally, data mining draws on the databases and the data warehouse, places its results in the knowledge base of the expert system, and the expert system infers from that knowledge and draws the final conclusions.

5. Discussion

This paper examines the current application status and key aspects of existing financial decision support systems with the help of survey research, literature research, quantitative analysis, and case analysis. On this basis, it designs an artificial-intelligence-supported approach to organizational financial planning, system mechanisms, and execution tracking, and finally, with the help of cases, makes suggestions for the construction and application of the new system in practice. The following conclusions are drawn from the research.

The application of a financial decision support system can have a positive impact on enterprise financial decision-making: it makes financial analysis more comprehensive and accurate, increases the amount of useful information available for decisions, and improves the timeliness of financial decision-making. However, existing systems are not widely used, and their functions are not comprehensive enough to meet all the needs of decision-makers. This situation arises mainly because existing systems generally suffer from a low degree of intelligence, high construction and operating costs, insufficient timeliness of decision support, and poor decision-making effects. By applying artificial intelligence technology extensively within the financial decision support system, a financial decision support system under artificial intelligence can be constructed. The new system can provide decision-makers with more comprehensive and accurate decision-useful information and, while respecting the cost-benefit principle, it expands the scope of financial decision support and improves the scientific rigor and objectivity of financial decision-making. It reduces the probability of irrational decisions, thereby improving the overall quality of the enterprise's financial decision-making and supporting its long-term, stable, and healthy development. It should be emphasized that, as the initiator of financial decision-making tasks, the proposer of financial decision-making goals, and the final reviewer of financial decisions, the decision-maker is always the leader of financial decision-making: the new system is built to support financial decision-making, not to replace humans in making financial decisions.

In terms of system construction, the new system consists of three parts, namely, the Information Agency Department, the Analysis Department, and the Communication Department, which are responsible for financial decision-making and decision planning. Artificial intelligence technology is used for decision-making as follows: first, the new system mines and organizes the data stored in each database, constructs different portraits, and stores them in the database; it then performs portrait matching to obtain a financial decision-making scheme. The scheme can subsequently be modified and improved through human-computer interaction to form the final scheme. Regarding the implementation path, the new system requires a certain environmental foundation, including basic business and financial systems, data warehouses, and the introduction and training of relevant talent. The supporting authorization and accountability systems should also be improved to ensure the safe and smooth operation of the new system. Different decision customization paths are adopted for different types of problems. Routine problems are logically clear and relatively simple; they rely mainly on the new system's autonomous decision-making to improve efficiency and reduce costs. Complex problems are highly unstructured, have a significant impact on the enterprise, and require a high degree of human-machine collaboration, which is realized through human-computer interaction. In each decision-making process, the new system continually evaluates financial decision-making principles such as goal matching and process compliance, so as to learn from experience and improve decision quality. Combined with the case study of the group, this paper argues that the existing system should further expand the application scope of artificial intelligence technology: collecting decision-useful information more extensively, improving financial analysis and decision-making models, making financial analysis, forecasting, and decision-making routine and autonomous, and enhancing human-computer interaction to improve decision quality. The application of a cloud data warehouse could also be considered to further reduce costs. At the same time, large group enterprises should assess the necessity of building such a system, follow step-by-step and cost-benefit principles during construction, and pay attention to the influence of human factors on the application effect of the new system.

6. Conclusion

At present, the capital market is developing rapidly, which greatly intensifies the competitive pressure on enterprises, so strengthening enterprise risk management has become an important task. Traditional statistical analysis methods have many shortcomings: their assumptions make it difficult to handle large amounts of information, and the dynamic changes of financial indicators are not captured. To address these problems, this paper proposes a new method of financial risk analysis based on association rules. The research covers the following aspects. First, data mining methods are described in detail from the early stage of their development; it can be said that the support-confidence framework laid a solid foundation for existing association rule mining algorithms. For a given dataset, the rules that link items vary as the confidence and support thresholds vary. The HIUA incremental mining algorithm proposed in this paper is based on a hash structure; unlike the standard IUA algorithm, it effectively remedies that algorithm's defects. In addition, support counting is always an important factor limiting the speed of such algorithms; with the help of the hash structure, access speed is improved and correct operation of the algorithm is ensured. Applying this to corporate financial risk, the paper develops a new incremental mining algorithm based on association rules, which enhances the company's ability to resist financial risk. Using 20 listed companies as research samples, the article analyzes their financial risks and gives a comprehensive account of the benefits of the mining approach. A prototype mining system is also constructed on the basis of the algorithm, which allows flexible control of the thresholds and produces rule-count and time-efficiency graphs. Given the shortcomings encountered in the research, the next stage should focus on the following improvements: continuing to explore data mining technology in depth and raising mining efficiency to a new level so that the requirements of large-scale database mining can be met effectively, and continuing to expand the data mining methods related to financial risk analysis and crisis early warning and to measure their respective advantages in later practical applications.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.