Abstract
In recent years, with the increase in computing power, the sharp drop in hardware costs, and the maturing of data management technology, large volumes of data have accumulated rapidly in every department of the enterprise. How can we actively discover useful patterns in this mass of data and turn them into knowledge, quickly and accurately obtain high-quality information, and use that information to guide users in decision-making and generate greater economic and social benefit? This paper focuses on classifier models based on the BP neural network and on combining the BP neural network with other optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), and the Adaboost algorithm. GA and PSO have global search ability and are mostly used to optimize the weights and thresholds of the network and the number of hidden layer nodes, while the Adaboost algorithm builds a strengthened classifier based on the idea of ensemble learning. Data mining technology has now moved from the laboratory research stage to the commercialization stage; as an analysis tool it can be applied in many fields, such as financial analysis, engineering design, scientific research, management, and production control. At the end of this paper, the improved Adaboost_BP classifier is applied, and the result shows that the efficiency of hotel management has increased by at least 75%.
1. Introduction
With the development of information technology, people's ability to produce and collect data has been greatly improved, while the ability to analyze data and acquire knowledge has lagged behind. From data collection, database creation, and data management to advanced data analysis, data mining technology has emerged and developed accordingly. Data mining is the nontrivial process of identifying valid, novel, useful, and understandable patterns in data sets or databases. Classification mining is one of the important applications of data mining; the classification ability is realized by constructing a classifier, and construction methods include statistical methods, machine learning methods, and artificial neural network methods. The patterns that a neural network can recognize are determined by the network topology, the connection weights, and the node thresholds. Therefore, optimization methods for neural network models fall mainly into two groups: optimizing the network topology and optimizing the weights and thresholds of the network.
Wang Li applied artificial neural network mapping rules and data mining methods to the grades of GIS alumni and used SQL Server data mining services to build mining models relating graduation outcomes to various courses and to professional GIS courses. B. Grade applied data mining results to the design of a GIS vocational education plan. The results show that the learning outcomes of geographic information system (GIS) graduates are affected most by professional courses and computer courses, and that hands-on courses such as remote sensing, digital imaging, and GIS course design and development have a greater impact on graduates' performance. In the new round of vocational education curriculum review, computer courses, digital imaging with remote sensing, and hands-on time for GIS design and development courses were therefore added. However, the system runs only on a single machine, and data cannot be shared at any time [1]. Mengjie argues that online teaching enabled by the Internet has changed traditional teaching methods: the network is fast and convenient, is not restricted by time and space, reduces teaching costs, and improves teaching quality. Using neural networks and data mining technology, he completed the recommendation of online training courses. The online course recommendation mechanism he proposed can be divided into two parts, preprocessing and processing of course recommendations, and was applied to online learning in home-based schooling to verify the feasibility and applicability of the study. However, this mechanism has not been widely promoted, and it will take some time to put it into practice [2].
Xinping took data mining neural network classification algorithms as the main research object, adopted classification experiments and bibliographic research methods, and compared three algorithms: the SVM algorithm, a conventional data mining classification algorithm, and the ELM algorithm. The experimental results show that different data mining classification algorithms have their own advantages. In sorting experiments on corn seeds and red wine, the ELM neural network algorithm gave the best results, with an accuracy above 83% and the shortest modeling time. He concludes that only by choosing the data mining classification algorithm suited to the specific data volume can the accuracy and efficiency of data mining be effectively improved. However, the experimental data set is too small to fully support the conclusions, and further improvement is needed [3].
The empirical part of this article applies the classifier model based on the BP neural network to the field of financial analysis and establishes a financial crisis early warning model for listed companies. Through a data preparation phase, a classifier modeling phase, a mining phase, and an interpretation and analysis phase, the data mining method is applied to the financial early warning system to build a classifier suitable for financial early warning. A comparison of the classification results on test samples of a single BP classifier, the Adaboost_BP classifier, and the improved Adaboost_BP classifier shows that the improved classifier raises work efficiency by nearly 75%, which further verifies the effectiveness of the Adaboost algorithm and of the proposed improvement.
2. Constructing Classifiers and Data Mining Techniques and Methods
Methods for constructing classifiers include statistical methods, machine learning methods, neural network methods, and so on. Statistical methods include Bayesian classification algorithms and nonparametric methods (such as instance-based learning), in which relevant prior knowledge is used as the basis of classification. Machine learning methods include the decision tree method and the rule induction method, the latter usually expressed in the form of production rules. Neural network methods use a network model whose application is now quite advanced. In addition, there are rough set methods, support vector machines (SVM), etc. [4, 5].
2.1. Bayesian Classification Algorithms
The Bayesian classification algorithm is a classification method from statistics that classifies samples using probability theory. In general, the Bayesian classification algorithm is comparable in quality to decision tree and neural network classifiers, can be applied to large databases, and is simple, accurate, and fast for specific categories [6].
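The Bayesian idea above can be sketched as a tiny categorical naive Bayes classifier. This is an illustrative sketch only, not the implementation used in the paper; the toy data and the Laplace smoothing constant are our own assumptions.

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Estimate P(class) and P(feature value | class) from counts.
    `samples` is a list of feature tuples; Laplace smoothing avoids zero
    probabilities for unseen feature values."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            cond[(i, y)][v] += 1
    n = len(labels)

    def predict(x):
        # pick the class maximizing P(class) * prod_i P(x_i | class)
        best, best_p = None, -1.0
        for c, nc in priors.items():
            p = nc / n
            for i, v in enumerate(x):
                p *= (cond[(i, c)][v] + 1) / (nc + 2)  # Laplace smoothing
            if p > best_p:
                best, best_p = c, p
        return best

    return predict

# toy data: feature 0 agrees with the class label
predict = train_nb([(1, 0), (1, 1), (0, 0), (0, 1)], [1, 1, 0, 0])
```

On this toy data the classifier learns that feature 0 carries the class signal, so `predict((1, 0))` returns 1 and `predict((0, 1))` returns 0.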
2.2. Nonparametric Methods
In parametric inference, we know the exact form of the distribution (most often a normal distribution is assumed) and only need to estimate, or make hypotheses about, its unknown parameters. In practice, however, we usually know little or nothing about the form of the distribution (it may, for example, be skewed or multimodal). Statistical inference that does not rely (or at least does not rely entirely) on the assumed distribution is usually called a nonparametric method [7, 8].
2.3. Decision Tree Analysis Method
Decision tree analysis combines probability theory and graph theory. A decision tree is a risk-oriented approach that compares possible alternatives in order to reach the best decision. A decision tree includes the root (decision node), other internal nodes (program or stage nodes), leaves (target nodes), branches (line segments with probabilities), probability factors, and cost-benefit factors [9].
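The smallest possible decision tree, a one-split "stump" (root plus two leaves), can be sketched as follows. This is a minimal illustration under our own toy data, not the paper's method; it picks the threshold that minimizes training misclassifications.

```python
def fit_stump(xs, ys):
    """One-node decision tree (root + two leaves): choose the threshold on
    a single numeric feature minimizing training misclassifications."""
    best = None
    for t in sorted(set(xs)):            # candidate split points
        for left_label in (0, 1):
            right_label = 1 - left_label
            errs = sum((left_label if x <= t else right_label) != y
                       for x, y in zip(xs, ys))
            if best is None or errs < best[0]:
                best = (errs, t, left_label)
    _, t, left_label = best
    return lambda x: left_label if x <= t else 1 - left_label

# toy data separable at any threshold between 2.0 and 8.0
stump = fit_stump([1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1])
```

Here `stump(1.5)` returns 0 and `stump(8.5)` returns 1; a full decision tree recurses this split on each resulting subset.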
2.4. Rule Induction
The basic idea of induction is to list a number of special cases, analyze this finite set of cases, and finally determine the overall relationship. However, generalizing from practical problems is not an easy task, and the induction process often follows no fixed rules.
To build a classifier, a training set containing labeled sample data is needed. The training set consists of a set of database records or tuples, each of which is composed of feature (attribute) values; in addition, each training sample carries a class label. A specific sample can thus take the form (v1, v2, ..., vn; c), where vi represents a field value and c represents the category [10, 11].
3. Research Experiment on Optimization Method of BP Neural Network Classifier
Network initialization: according to the system input-output sequence (X, Y), determine the number n of input layer nodes, the number l of hidden layer nodes, and the number m of output layer nodes, and initialize the connections among the neurons of the input, hidden, and output layers [12]. The input layer passes each input signal, weighted by its connection weight, forward into the network.
Hidden layer output calculation: according to the input vector x, the connection weights w_ij between the input layer and the hidden layer, and the hidden layer thresholds a_j, the hidden layer output H is calculated [13]:

$$H_j = f\left(\sum_{i=1}^{n} w_{ij} x_i - a_j\right), \quad j = 1, 2, \ldots, l$$
In the formula, l is the number of hidden layer nodes and f is the hidden layer activation function, which has many possible forms; the commonly used activation functions are as follows [14]. Step function:

$$f(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$$
Piecewise linear function [15]:

$$f(x) = \begin{cases} 1, & x \geq 1 \\ x, & -1 < x < 1 \\ -1, & x \leq -1 \end{cases}$$
The most commonly used is the sigmoid function

$$f(x) = \frac{1}{1 + e^{-x}}$$

and there is also the commonly used hyperbolic tangent symmetric function [16, 17]:

$$f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
With a parameter a > 0 controlling the slope, the sigmoid becomes

$$f(x) = \frac{1}{1 + e^{-ax}}$$
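The activation functions above can be written directly in code; a minimal sketch for scalar inputs (the slope parameter `a` of the sigmoid defaults to 1):

```python
import math

def step(x):
    """Hard threshold: 1 for nonnegative input, 0 otherwise."""
    return 1.0 if x >= 0 else 0.0

def sigmoid(x, a=1.0):
    """Logistic sigmoid; a larger slope parameter a sharpens the transition."""
    return 1.0 / (1.0 + math.exp(-a * x))

def tanh_sym(x):
    """Hyperbolic tangent: symmetric about 0, output in (-1, 1)."""
    return math.tanh(x)
```

Note that the sigmoid is bounded in (0, 1) and tanh in (-1, 1); neither ever reaches its bounds exactly, which is why targets such as 0.9/0.1 are used later in the text.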
Output layer output calculation: according to the hidden layer output H, the connection weights w_jk, and the thresholds b_k, the network output O is calculated [18]:

$$O_k = \sum_{j=1}^{l} H_j w_{jk} - b_k, \quad k = 1, 2, \ldots, m$$
Error calculation: according to the network prediction output O and the expected output Y, the network prediction error e is calculated [19]:

$$e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m$$
Weight update: the network connection weights w_ij and w_jk are updated according to the network prediction error e [20, 21]:

$$w_{ij} = w_{ij} + \eta H_j (1 - H_j) x_i \sum_{k=1}^{m} w_{jk} e_k, \qquad w_{jk} = w_{jk} + \eta H_j e_k$$
Initialization randomly assigns the input-to-hidden-layer connection weights w_ij and the hidden-to-output-layer connection weights w_jk, as well as the hidden layer thresholds a_j and the output layer thresholds b_k.
If the average error of the system is greater than the allowable error value, new connection weights and thresholds are calculated [22] by moving each weight against the error gradient:

$$\Delta w(t+1) = -\eta \frac{\partial E}{\partial w}$$
The greater the learning rate η, the greater the weight change, which can speed up network training [23, 24]; however, the result may oscillate. In order to increase the learning rate without causing oscillation, a momentum term with coefficient α is added to the formula, namely,

$$\Delta w(t+1) = -\eta \frac{\partial E}{\partial w} + \alpha \Delta w(t)$$
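The momentum update above can be sketched for a single weight as follows; the default learning rate and momentum coefficient here are illustrative assumptions, not values from the paper.

```python
def update_weight(w, grad, prev_dw, eta=0.1, alpha=0.9):
    """Gradient-descent step with a momentum term: a fraction alpha of the
    previous change prev_dw is carried over, damping oscillation while
    permitting a larger learning rate eta."""
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw
```

Calling it repeatedly with a constant gradient shows the effect: the step size grows toward `-eta * grad / (1 - alpha)`, i.e. momentum effectively amplifies the learning rate in a consistent descent direction.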
Here b_j is the base width of the j-th hidden layer node, a normalization constant of the radial basis function [25]. The output of the k-th node in the output layer of the network is a linear combination of the outputs of the hidden layer nodes:

$$y_k = \sum_{j=1}^{l} w_{jk} h_j$$
After the clustering is completed, the neural network NNm used to compute the membership of the fuzzy rules is trained, and the membership of each output is calculated. The training samples of each neural network consist of the original inputs together with j outputs [26, 27], defined as follows: if a sample in the original training sample set is clustered into the m-th group, the output part corresponding to that sample is a vector whose m-th component is 1 and whose other components are 0.
Note that although each neural network is trained on only part of the samples, the trained network exploits the properties of the sigmoid function to apply the rules correctly to all other inputs [28]. Because the sigmoid function can never exactly reach the values 1 and 0, they are replaced by 0.9 and 0.1, which speeds up the training of the network [29, 30]. After the training samples are decided, the network can be trained.
4. GA Optimized BP Neural Network Model
4.1. Principle of GA Algorithm
A genetic algorithm is a parallel optimization method that imitates nature's genetic mechanism and biological evolution. It introduces the biological principle of "survival of the fittest" into the coded string population formed by the optimization parameters: through selection, crossover, and mutation operators, well-adapted individuals are preserved and poorly adapted individuals are eliminated, and the new population inherits information from the previous generation. The process repeats until the stopping conditions are met.
As shown in Figure 1, GA adjusts and optimizes the BP neural network. The genetic algorithm and the neural network interact until the best connection weights and thresholds are found, and these are then used as the starting point for BP network simulation and prediction. In other words, the genetic algorithm optimizes the initial weights and thresholds of the neural network. Combining the neural network with the genetic algorithm involves population initialization, the fitness function, the selection operation, the mutation operation, and the crossover operation.

The basic elements of the genetic algorithm are as follows: the chromosome encoding method, the fitness function, the genetic operators, and the run parameters. Chromosome encoding is the encoding of individuals, commonly binary or real-valued; binary encoding represents an individual as a bit string that the genetic operators can manipulate directly. The fitness function computes each individual's fitness value from the evaluation of its candidate solution, and selection is carried out on the basis of these fitness values.
The genetic algorithm is an optimization method for ordinary neural networks. If the BP network is regarded as a prediction function, then when the genetic algorithm optimizes the neural network it is optimizing the parameters of that function, and the optimized network's predictive ability is generally better than before optimization. However, the algorithm is limited: it can only improve the accuracy of an existing neural network and cannot rescue a network whose error is inherently high. In particular, some prediction errors remain because of the small sample size and uneven distribution, as shown in Table 1.
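The GA loop described above (selection, crossover, mutation, elitist survival) can be sketched as a tiny real-coded GA. This is a generic sketch under our own assumed operators and parameters, not the paper's implementation; the sum-of-squares fitness stands in for a BP network's training error evaluated at candidate initial weights.

```python
import random

def ga_minimize(fitness, dim, pop=20, gens=50, pm=0.1, seed=0):
    """Tiny real-coded GA: tournament selection, one-point crossover,
    Gaussian mutation, and elitism. `fitness` is the error to minimize
    (e.g. the training error of a BP network seeded with the candidate
    weight vector). Requires dim >= 2 for the crossover point."""
    rng = random.Random(seed)
    P = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        nxt = sorted(P, key=fitness)[:2]            # elitism: keep best two
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=fitness)  # tournament selection
            b = min(rng.sample(P, 3), key=fitness)
            cut = rng.randrange(1, dim)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.1) if rng.random() < pm else g
                     for g in child]                # Gaussian mutation
            nxt.append(child)
        P = nxt
    return min(P, key=fitness)

# stand-in for the BP training error: sum of squared weights
best = ga_minimize(lambda w: sum(g * g for g in w), dim=4)
```

Elitism guarantees the best fitness never worsens between generations, which matches the "well-trained individuals are preserved" idea in the text.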
4.2. PSO Optimized BP Neural Network Model
Similar to the genetic algorithm, the particle swarm optimization (PSO) algorithm can be used either to optimize the network's initial weights and thresholds or to optimize the network structure, consistent with the previous section. This section mainly discusses PSO optimization of the initial weights and thresholds of the BP network. PSO is a swarm intelligence optimization algorithm in the field of intelligent computing, inspired by the foraging behavior of bird flocks: the easiest and most effective way for any bird to find food is to search the area around the bird currently closest to the food.
The PSO algorithm first initializes a group of particles in the solution space; each particle represents a potential optimal solution of the optimization problem and is characterized by three quantities: position, velocity, and fitness value. The fitness value, calculated by the fitness function, indicates the quality of the solution.
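The position/velocity/fitness description above corresponds to the standard PSO update, sketched here under our own assumed parameters (inertia weight and acceleration constants are illustrative, not from the paper); the sphere function again stands in for a network's training error.

```python
import random

def pso_minimize(f, dim, n=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO: every particle keeps a position X, velocity V, and its
    personal best; all particles are also pulled toward the global best."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]                              # inertia
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]          # update personal best
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]         # update global best
    return gbest

best = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

Unlike the GA, no crossover or mutation is needed; the velocity update alone moves the swarm, which is why PSO is often simpler and faster on this kind of problem.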
As shown in Figure 2, the particle swarm optimization algorithm is based on the theory of collective intelligence and guides the search through the information carried by the particles. Compared with the GA, PSO also uses a population-based search strategy that tracks the best positions found so far, but it relies on a simple velocity-position update model and avoids complex genetic operations.

Generally speaking, PSO searches for the optimal solution faster than GA, as shown in Table 2.
Each individual is decoded into the weights and thresholds of a neural network, and the training samples are fed into the corresponding network; optimizing the network weights is then a process of trial and error. To ensure generalization, the sample space of the trained neural network is usually divided into two parts, a training set and a test set. When evaluating an individual, random sampling is used so that the test results of different runs are independent.
The prediction error over the training samples is also calculated, and each individual's fitness value is set from this error so that the selection operator can act on the individuals.
4.3. Adaboost_BP Neural Network Model
Different from the GA and PSO optimization algorithms discussed in the previous two sections, the Adaboost algorithm enhances the classification effect of the BP classifier through iteration and combination. The Boosting algorithm is a typical ensemble learning algorithm based on resampling and is used to solve classification problems; it has been well developed and widely applied. Its core idea is that, when training a new classifier, more attention is paid to the training samples that are difficult to classify correctly. In 1995, Freund and Schapire proposed the Adaboost algorithm, an improvement on Boosting that is easy to apply in practice and is mostly used for two-class problems.
On the basis of the single BP classifier in the previous section, one BP classifier is regarded as a "weak classifier," and multiple weak classifiers are integrated through the Adaboost algorithm. Strengthened classifiers composed of 2 to 20 single-BP-network weak classifiers were trained sequentially.
As shown in Figure 3, the basic idea is to combine the outputs of different "weak" classifiers to generate an effective classification. Taking the BP network as the weak classifier, a number of them are trained on the input data, and Adaboost assembles them into a strengthened classifier, improving the accuracy and reliability of classification. Adaboost yields a high-precision classifier suitable for both two-class and multiclass problems. One disadvantage of this type of ensemble algorithm is that it is sensitive to noise; in addition, when there are too many outliers in the training data, serious overtraining occurs easily, manifested as a sharp growth of the weights on a small number of samples, which ultimately harms the effectiveness of the classifier.
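The Adaboost loop described above can be sketched for labels in {-1, +1}. This is a generic Adaboost sketch, not the paper's Adaboost_BP code: for brevity the weak learner here is a weighted decision stump standing in for a small BP network, and the toy data is our own.

```python
import math

def adaboost(train_weak, X, y, rounds=10):
    """Adaboost sketch. `train_weak(X, y, w)` returns a classifier h with
    h(x) in {-1, +1}; per the text this would be a small BP network."""
    n = len(X)
    w = [1.0 / n] * n                        # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        h = train_weak(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if err >= 0.5:
            break                            # weaker than chance: stop
        err = max(err, 1e-10)                # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * yi * h(xi))   # raise weights of mistakes
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]             # renormalize to a distribution
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

def stump_learner(X, y, w):
    """Weighted decision stump on feature 0, standing in for a BP net."""
    best = None
    for t in [x[0] for x in X]:
        for sign in (1, -1):
            e = sum(wi for wi, xi, yi in zip(w, X, y)
                    if (sign if xi[0] <= t else -sign) != yi)
            if best is None or e < best[0]:
                best = (e, t, sign)
    _, t, sign = best
    return lambda x, t=t, sign=sign: sign if x[0] <= t else -sign

X = [(0.0,), (1.0,), (2.0,), (3.0,)]
y = [-1, -1, 1, 1]
clf = adaboost(stump_learner, X, y, rounds=5)
```

The exponential reweighting step is exactly where the noise sensitivity mentioned above comes from: a persistently misclassified outlier has its weight multiplied every round.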

A self-organizing neural network learns from data without supervision, organizing the data according to the organizational model it creates by itself in order to extract data with salient features, for example by grouping samples by feature. Just as neurons in the human brain interact, the network adjusts its connections so that the structure of the data is reflected in the artificial neural network. However, as research on neural networks intensifies, one problem remains: the network cannot give an interpretable explanation of its results. Nevertheless, a data mining system based on such an opaque neural network does not easily lose results, which increases the stability of the system, as shown in Table 3.
A neural network model can only be made to fit as well as possible; 100% accuracy is impossible, and some errors are inevitable. The classification accuracy rate is the most important indicator for measuring a classifier. For the financial crisis early warning model, classification accuracy must consider two situations: in the first, a company whose financial situation is actually healthy is predicted to be in crisis; in the second, a company in financial crisis is predicted to be healthy. The former generally has little impact on the company; the latter is far riskier, because the early warning function is lost for management decision-makers and the opportunity for crisis management is delayed. We define the former as the first type of error and the latter as the second type of error. When evaluating a financial early warning model, it is necessary to consider the overall misclassification rate while minimizing the second type of error. When the second-type error rate is 0, no company in crisis has been misjudged, which is the classification result the early warning model strives for.
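The two error types defined above can be computed directly from predictions; a minimal sketch (the label convention, 1 for crisis and 0 for healthy, is our own assumption):

```python
def error_rates(y_true, y_pred, crisis=1):
    """Type I: a healthy firm flagged as in crisis (false alarm).
    Type II: a crisis firm judged healthy (missed warning, the costly case).
    Returns (type1_rate, type2_rate, overall_error_rate)."""
    healthy = [(t, p) for t, p in zip(y_true, y_pred) if t != crisis]
    in_crisis = [(t, p) for t, p in zip(y_true, y_pred) if t == crisis]
    type1 = sum(p == crisis for _, p in healthy) / len(healthy)
    type2 = sum(p != crisis for _, p in in_crisis) / len(in_crisis)
    overall = sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
    return type1, type2, overall
```

For example, with `y_true = [0, 0, 0, 1, 1]` and `y_pred = [0, 1, 0, 1, 0]`, one of three healthy firms is misflagged (type I = 1/3) and one of two crisis firms is missed (type II = 1/2).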
In this round of testing, the relationship between the number of weak classifiers and the error rate is shown in Figure 4. From the test results, the classification error rate of the strong classifier integrated from 12 weak classifiers is the smallest.

After 12 sets of tests on the strong classifier built from 20 BP neural networks, the average classification error rate is 0.1236, which is 1.46 percentage points lower than that of a single BP classifier, showing a certain optimization effect. The first-type classification error rate is 0.07518, and the second-type classification error rate is 0.1437. This shows that the integrated strong classifier improves the classification of samples from financially normal companies, while the classification of samples from financial crisis companies does not change significantly; the overall classification ability is nevertheless improved to a certain extent, so the optimization takes effect. The second-type classification error rate does not improve significantly for the following reasons. First, the financial data of financially normal companies are relatively homogeneous, so their indicator features are easy to extract, whereas companies in financial crisis each have their own characteristics, which are not easy to extract. Second, the companies in financial crisis are unevenly distributed; for example, there are more ST companies in manufacturing than in other industries. Third, the second-type error rate of the BP classifier is inherently high: if the crisis company samples are not fully trained, it is difficult to improve the classification accuracy. Therefore, the ability to classify the samples of companies in financial crisis is not significantly improved.
As shown in Figure 5, the average first-type error rate is 0.1020, the second-type error rate is 0.0872, and the overall error rate is 0.0922, an improvement of 2.14 percentage points over the classifier before the improvement. This shows that increasing the emphasis on the samples of financial crisis companies and adaptively adjusting the weights of misjudged samples can significantly reduce the second-type classification error; the figure also gives the comparison curves of the first-type and second-type error rates over the 10 sets of tests.

First, the various features are extracted. When an attribute value exceeds its threshold, it is mapped into the higher-dimensional representation corresponding to the numbered data in the random forest, and this information is then filtered using the ant colony algorithm for attribute selection. The parameters are first initialized, and ants are generated at random at the beginning of the iteration. When an ant begins its search it chooses a path at random, and each path is evaluated according to the specific situation until a satisfactory result is reached; the system then stops the search, redistributes the pheromone, generates a complete set of ants again, and reproduces offspring to complete the iteration and obtain the best result.
As shown in Figure 6, for a two-class problem we generally give priority to one of the two classes in the sample. The weight update rule of the standard Adaboost algorithm updates a sample's weight only according to whether it was misclassified and does not focus the training on a particular class. In practice, even though the overall classification error meets the target, the error for one class may still be high; if misjudging that class carries a higher risk, the standard rule is unsuitable for the practical problem. In response, this paper proposes a sample adaptive weighting algorithm based on the degree of importance.

For this reason, the algorithm distinguishes errors on positive and negative samples in the weight update so that the class structure is built into the sample weighting. First, the relative importance of classification accuracy on the positive and negative sample classes is determined, and the weight distribution is divided according to this importance ratio. If the detection rate of positive samples is the main concern, the FNR value is required to be relatively small; increasing the weights of misclassified positive samples then gives each weak classifier a stronger ability to classify positive samples correctly.
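One way to realize the idea above in the Adaboost weight update is to boost misclassified positive (crisis) samples more strongly than misclassified negative ones. This is our own sketch of the importance-based weighting idea, not the paper's exact rule; the importance ratio `k` is an assumed knob.

```python
import math

def reweight(w, y, pred, alpha, k=2.0, crisis=1):
    """Importance-weighted Adaboost-style update: a misclassified crisis
    sample has its weight boosted k times more strongly (in the exponent)
    than a misclassified healthy sample; correct samples are down-weighted
    as usual, and the result is renormalized to a distribution."""
    out = []
    for wi, yi, pi in zip(w, y, pred):
        if yi == pi:
            out.append(wi * math.exp(-alpha))      # correct: shrink
        else:
            boost = k if yi == crisis else 1.0     # crisis errors matter more
            out.append(wi * math.exp(alpha * boost))
    s = sum(out)
    return [v / s for v in out]
```

With `k > 1`, a missed crisis sample ends up heavier than a false alarm after one round, so subsequent weak classifiers concentrate on reducing the second-type error (FNR).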
It can be seen from the previous experiment that the second-type error rate is larger than the first: more companies in financial crisis are judged to be normal than financially normal companies are judged to be in crisis. Since the cost of the second type of error is much higher than that of the first, the risk of this misjudgment is severe. According to the importance-based sample adaptive weighting algorithm proposed above, the FNR and FPR indicators can be made to correspond, respectively, to the probabilities of the two types of errors.
As shown in Figure 7, a hotel's financial crisis is not a sudden event but a continuous process, running from the onset of difficulty to the completion of the hotel's bankruptcy and liquidation. Methods for studying and predicting corporate financial crises, such as statistical analysis, principal component analysis, factor analysis, and time series methods, have developed continuously, and statistical analysis in particular has long been a good way to predict whether a company will fall into financial crisis. Applying artificial neural networks to hotel financial early warning is a research direction of recent years. This paper applies the BP classifier to the classification and mining of financial data, establishes a hotel financial early warning model, and improves the algorithm to raise the accuracy of the BP classifier, as shown in Table 4.

The hotel financial early warning system provides scientific decision support by comprehensively evaluating and predicting indicators of financial status, trends, and company changes. Many indicators can be used to evaluate financial crises; if all of them were integrated into the model, it would be too complicated, which harms forecast accuracy. Therefore, the indicators must first be screened, and those with the greatest impact on class identification selected.
This paper selects 30 financial indicator variables covering six aspects of the hotel: solvency, per-share indicators, earnings quality, profitability, operating capacity, and capital structure. Each of the 30 major indicators also includes subindices for the current period, the same period last year, or the year-on-year growth rate, so a total of 90 financial data variables first enter the scope of investigation; mathematical statistics are then used to screen the indicator variables.
Data normalization is a processing method that maps the selected data into the interval [0, 1]. It corrects for differences in magnitude: because the indicators have different units and orders of magnitude, the original data must be transformed to avoid giving some indicators disproportionate influence on the estimates. There are two main normalization methods: the minimum-maximum deviation method and a functional transformation.
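The minimum-maximum deviation method mentioned above is a one-line transformation per value; a minimal sketch (the constant-column fallback of 0.0 is our own convention):

```python
def min_max(column):
    """Scale a list of raw indicator values into [0, 1] so indicators
    measured in different units become comparable. A constant column is
    mapped to all zeros (an assumed convention) to avoid dividing by 0."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

scaled = min_max([10.0, 20.0, 30.0])  # each column is scaled independently
```

Applying this column by column to the 90 raw variables puts every indicator on the same [0, 1] scale before training.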
As shown in Figure 8, the initial weights of the network determine the starting point on the error surface, so a good choice can greatly shorten the duration of network training and improve its pertinence. One approach is to set the network's weights and thresholds automatically to random values in [-1, 1]; it is also possible to adopt a trial method, initializing several times and keeping the weights and thresholds that give the smallest first-pass error. In this paper, the single BP classifier uses the random initial value method in MATLAB.

Determining the number of hidden neurons is a key issue. In the traditional rule-of-thumb method, the number of hidden neurons is tied to the size of the input vector, which means that when the input dimension is high the chosen number of hidden units may not deliver enough accuracy and speed. A newer and better method is therefore adopted: the basic principle is to start training with a small number of neurons and automatically increase the number of neurons in the network by continuously checking the output error. In each cycle a new hidden neuron is added and the error of the new network is evaluated; this process is repeated until the specified error or the maximum number of hidden neurons is reached. The structure of the RBF neural network obtained in this way does not depend on the initial values.
5. Conclusions
This research mainly studies data mining methods based on neural networks, including the BP neural network. The application of neural networks to data mining was not accepted at first, mainly because of their large scale, complicated structure, and long learning time. With improvements in robustness to noisy data and in training algorithms, especially improved network pruning algorithms and rule extraction methods, neural networks have become more and more popular for mining a wide range of data. At present, the feedforward BP neural network is the most widely used network. This article mainly studies methods of constructing a data mining classifier based on the BP neural network and its combination with a variety of optimization algorithms, and analyzes them through actual cases. The constructed BP classifier is applied to a hotel management problem, the evaluation index system, which helps hotel management avoid and diversify risks early and effectively. Based on the theories of data mining and neural networks, this paper studies the combination of the Adaboost algorithm with the neural network and establishes a new indicator system through significance testing. The selection of indicators draws on the empirical research of domestic scholars; the 30 selected indicators express the financial situation of a hotel fairly comprehensively, and although there is some correlation among them, all 30 are included in the early warning system in order to retain as much information as possible. The research content of this article is still relatively simple. The main results are the application of the BP neural network to the classifier, the use of the Adaboost algorithm to improve the BP classifier, the proposal and verification of the optimization algorithm, and the introduction of the constructed BP classifier into the hotel management evaluation system.
However, at the intersection of data mining, neural networks, and financial early warning, many aspects still need in-depth study. I hope that this model can be further optimized and perfected in follow-up research and exploration, so that it can truly serve every enterprise and the system it supports can make its greatest contribution.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This study was supported by the Phased Achievements of the 2021 School-level Scientific Research Project (21XYJS023) of the China University of Labor Relations.