Abstract

This paper uses random matrix theory to construct a neural network model for business performance management. A sample covariance matrix is constructed from the random monitoring matrix, and its maximum and minimum eigenvalues are solved. The ratio of these eigenvalues is used to construct an eigenvalue detection index, and a threshold algorithm for this index is established to judge abnormal states in enterprise operation. Data from 66 listed Internet finance companies are selected, normalized, and tested for correlation, and the index weights of each level are obtained using hierarchical analysis to derive the expected output of the BP neural network. Finally, the constructed BP neural network performance evaluation model is used for network training and simulation analysis: 192 data points from 48 companies over the past four years are selected as training samples, and data from 8 companies over the same period are used as test samples to analyze the simulation output. Using the original data rather than the principal factors as inputs to the BP neural network model, and after two rounds of systematic optimization, a final model with high accuracy, low mean square error, and low average error was formed. When tested on newly added data, the model achieved an accuracy of 95.98% for ranking prediction and an average deviation of 0.0021 points for score prediction, which fully reflects its feasibility and adaptability.

1. Introduction

With the accelerating process of globalization, a country needs a stronger competitive advantage to compete in the international market, and national competitiveness ultimately rests on enterprises; improving the competitive advantage of enterprises therefore improves national competitiveness [1]. However, to achieve greater dominance in an increasingly severe international competitive environment, enterprises must occupy a leading position in the established global value chain, and to reach this goal they must continuously refine their strategic objectives and select scientific and effective performance evaluation methods. With the progress of science and the spread of knowledge, enterprises need to rely on scientific and effective management methods and experience to operate [2]. Enterprises should correctly understand the organizational restructuring brought about by change and the ways in which new knowledge and methods arising from scientific development alter the traditional management model; only then can they cope well with the impact of change on that model.

In recent years, the third industrial revolution, marked by high technology, has brought disruptive changes to the global competitive landscape, with knowledge and information beginning to replace traditional factors of production. Many countries have gradually begun to reduce their dependence on traditional industrial enterprises and to follow a new path of valuing and nurturing high-tech enterprises in order to enhance their international competitiveness [3]. The economic characteristics that distinguish high-tech enterprises from traditional industrial enterprises have attracted extensive attention from scholars, who have found that high-tech enterprises not only possess the economic characteristics of traditional industrial development but also exhibit high risk, high growth, and high uncertainty. These characteristics make it difficult for them to fund their growth through traditional financing methods (e.g., bank loans), so they can only obtain financial support in the start-up stage through venture capital as a special financing method. In the current competitive environment, knowledge and technology are undoubtedly the primary productive force [4]. Technological advancement has lowered the entry barriers of many industries, leading to an influx of competitors, and with the advent of the Internet era and the exponential growth of information, the competition faced by enterprises has shifted from regional to global. To dominate the market and achieve long-term development in this increasingly competitive environment, enterprises need to respond quickly to external information and understand market changes promptly, which requires a clear understanding of their business results and sufficient control over changes within the enterprise. This control is reflected not only in the understanding of traditional financial indicators but also in a good grasp of other indicators that affect the future development of the enterprise. The traditional performance evaluation system focuses only on the measurement of financial indicators; its evaluation focus is too narrow to correctly reflect the business results of the enterprise or to meet its future development needs [5].

With the continuous development of enterprise management theory, human resource management keeps advancing, and there are ever more ways and methods of performance appraisal. Compared with state-owned enterprises, the management of private enterprises and SMEs is more flexible: they can draw freely on the successful experience of enterprises in Western countries accumulated since reform and opening up, as large state-owned enterprises have done, and adjust at any time according to the internal and external environment. However, incomplete human resource performance appraisal systems and unsound mechanisms remain relatively prominent in SMEs, and failure to solve these problems will not only seriously harm the development of SMEs themselves but also have a serious negative impact on the national economy. Therefore, it is very important to take an SME as the research object, identify the common problems in its performance management, and propose countermeasures conducive to the healthy development of both the enterprise and the economy.

With the advent of the corporate system and the industrial revolution that facilitated the expansion of many firms, research on corporate performance evaluation has proliferated, and scholars began to explore statistical performance assessment methods to strengthen the ownership of capital and control within the firm. As finance grew in importance within companies, it came to be seen as a key indicator for performance evaluation, and companies around the world generally embraced a performance evaluation approach centered on financial indicators [6]. Experts and scholars have conducted a series of studies on employee performance management. The American sociologist Becker was among the first to suggest that the habitual ideas and work attitudes of corporate employees play a primary role in the development of the company, identifying two important forms of human resource practice: the results-oriented evaluation system and the bonus-based salary system. Martin and Mahoney point out that performance management is an iterative process comprising “measurement-contracting-planning-monitoring-control-evaluation-feedback” [7]. Laloux et al. believe that performance management is a combination of systemic and individual factors influenced by many aspects [8]. He et al. used principal component analysis and established the BCC model; the sample was drawn from the publicly disclosed data of 36 domestic listed financial companies, and the input-output ratios of these sample companies were used to derive objective analysis and evaluation results [9]. Shi et al. selected a sample of 37 listed agricultural companies, supported by financial data for three consecutive fiscal years, and found that low efficiency in the financial and operational aspects of listed agricultural companies led to an overall low level of performance [10].

Random matrix theory (RMT) is applied mainly in quantum physics, biomedicine, social science, power grid distribution, spectrum sensing, and other fields. Given access to large numbers of fault and defect samples, Costantino et al. mined the correspondence between critical equipment performance and state quantities through the confidence of association rules, characterized the time series of equipment state quantities as big data via high-dimensional random matrix theory, studied the characteristic root spectrum distribution and circularity of high-dimensional matrices containing time series models, and analyzed the historical and current state information of the state quantities [11]. Various spatially smoothed MUSIC algorithms have emerged one after another, which preprocess the signal to achieve decoherence. Spatially smoothed MUSIC algorithms reduce the dimensionality of the sample covariance matrix and therefore belong to the dimensionality-reduction class of algorithms, which incur array aperture loss and require a uniform linear array [12]. Non-dimensionality-reduction algorithms are also heavily used in decorrelation processing; compared with dimensionality-reduction algorithms they incur no loss of array aperture, but they apply only to specific array models. Hazen et al. proposed compressive sensing theory, which states that after a signal is sparsely represented, it can be sampled at a low rate through an observation matrix, and the original sparse signal can then be recovered from these observations [13].

Scholars are nearly unanimous on the importance and specific meaning of performance management: it is the effective management of employees to align their behavior and output with the goals of the company, thereby promoting the common development of individuals and the company. Proper employee performance management can improve employees’ motivation and efficiency, increase interdepartmental interaction and communication, enhance organizational cohesion, and enable employees to make continuous progress [14]. In the West, both theoretical research and practical application started earlier, so research on performance management there is relatively deep and mature in operation. By comparison, research on performance management in China started late and, especially in the practical operation of enterprises, is still developing and improving, with a large gap relative to Western countries. At present, scholars in China have recognized these gaps, analyzed them and sought solutions from different levels and perspectives with different approaches, and put forward a series of effective improvement measures and methods.

3. Construction of BP Neural Network Model for Business Performance Management Based on Random Matrix Theory

3.1. Random Matrix Theory Model Design

Random matrix theory originated in quantum physics. In recent decades, its use in the mathematical statistics of large-dimensional data and the remarkable statistical results obtained have led to wide and deep study of random matrix theory by the community. Random matrix theory can reflect the fluctuation characteristics of data by analyzing the correlation between random data, mapping the state of a complex system through data characteristics [15]. However, in practical engineering problems the data are not always large-dimensional, and there may be cases of too little data. When the data dimension is in the tens or hundreds, some properties of random matrices still hold with considerable accuracy, which makes it possible to apply random matrix theory to practical engineering problems. When the dimension of the matrix tends to infinity while the ratio of the number of rows to the number of columns converges to a constant, the empirical spectral distribution (ESD) function of the random matrix exhibits certain laws, such as the semicircle law (SCL), the single ring law (SRL), and the Marchenko-Pastur (M-P) law.

The empirical spectral distribution function is a concept often used in matrix theory to characterize the distribution of the characteristic roots of a random matrix. For a matrix $A$ of order $n$ with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$, the empirical spectral distribution function is

$$F^A(x) = \frac{1}{n} \sum_{i=1}^{n} I(\lambda_i \le x).$$

Here, $I(\cdot)$ is the indicator function: when its condition is not satisfied, the function value is 0; otherwise, the value is 1. The value of the empirical spectral distribution function is random, but its limiting distribution obeys special laws in certain special cases.
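As an illustration, the ESD at a point $x$ is simply the fraction of eigenvalues not exceeding $x$. The following is a minimal MATLAB sketch of this definition; the matrix, its size, and the evaluation point are illustrative, and a Hermitian (here, real symmetric) input is assumed so the eigenvalues are real.

```matlab
% Empirical spectral distribution F_A(x): fraction of eigenvalues <= x.
esd = @(A, x) mean(eig(A) <= x);

% Illustrative usage: a 200 x 200 symmetrized Gaussian matrix.
n = 200;
G = randn(n);
A = (G + G') / sqrt(2 * n);   % real symmetric, scaled
F0 = esd(A, 0)                % roughly 0.5 by symmetry of the spectrum
```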

Some elements of a “random matrix” are random variables; its elements are randomly distributed over some probability space. Random matrix spectral theory mainly studies the properties of the characteristic roots and eigenvectors of a matrix when its elements satisfy certain conditions; the limiting spectral distribution is the limit of the empirical spectral distribution when the amount of data is large enough and the dimension is high. Random matrix theory gradually emerged in quantum physics research, and continued research developed the theory of limiting spectral analysis of large-dimensional random matrices [16]. The algorithm proposed in this paper is set within random matrix spectral analysis theory, which mainly studies applications of the empirical spectral distribution of random matrices and the properties of their limiting spectra, which are usually nonrandom.

A Wigner random matrix is a common type of random matrix. Let $W$ be an $n \times n$ Hermitian random matrix whose elements on and above the diagonal are independent with expectation 0 and variance 1; then $W$ is a Wigner matrix of dimension $n$. The Gaussian Wigner matrix further requires the matrix to be symmetric, with the diagonal and upper-triangular elements independent of each other, and its probability density function (up to a normalizing constant) is

$$p(W) \propto \exp\left(-\frac{1}{4}\operatorname{tr} W^2\right).$$

Because the radar signal received over an actual wireless transmission channel is very complex, for convenience of study this paper uses a uniform linear array model: mutual coupling between array elements is negligible, and the reception properties of each element depend only on its position, not on its size, shape, or other factors. We use a far-field narrowband signal as the source; the signal is a stationary ergodic process, and its center frequency is much larger than its bandwidth, that is, $f_0 \gg B$. The narrowband signal can then be represented by the model

$$s(t) = a(t) e^{j(2\pi f_0 t + \varphi(t))},$$

where the amplitude $a(t)$ and phase $\varphi(t)$ vary slowly relative to the carrier.

The noise used in this paper is independent of the far-field narrowband source; it is Gaussian and stationary with zero mean, and its covariance matrix is $E[\mathbf{n}(t)\mathbf{n}^H(t)] = \sigma^2 \mathbf{I}$.

Suppose an $N$-dimensional symmetric matrix $A$ whose off-diagonal elements are independent identically distributed random variables following the standard normal distribution $N(0,1)$ and whose diagonal elements follow $N(0,2)$. The Wigner matrix formed from $A$ has an empirical spectral distribution that weakly converges to the semicircle law with probability density function

$$f(x) = \frac{1}{2\pi}\sqrt{4 - x^2}, \quad |x| \le 2,$$

where $f(x)$ is the probability density of the Wigner matrix spectrum and $x$ ranges over the (scaled) eigenvalues of the Wigner matrix. The effect of the semicircle law is shown in Figure 1.
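A quick numerical check of this convergence can be sketched in MATLAB as follows; the matrix size, bin count, and plotting choices are illustrative, and the eigenvalues are scaled by $\sqrt{N}$ so the limiting support is $[-2, 2]$.

```matlab
% Build an N x N Wigner matrix (off-diagonal N(0,1), diagonal N(0,2)) and
% compare the scaled eigenvalue histogram with f(x) = sqrt(4 - x^2)/(2*pi).
N = 1000;
G = randn(N);
A = triu(G, 1) + triu(G, 1)' + diag(sqrt(2) * randn(N, 1));
lambda = eig(A) / sqrt(N);            % scale so the support is [-2, 2]

histogram(lambda, 50, 'Normalization', 'pdf'); hold on;
x = linspace(-2, 2, 400);
plot(x, sqrt(4 - x.^2) / (2 * pi), 'LineWidth', 2);
legend('empirical spectrum', 'semicircle density');
```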

After establishing the augmented state matrix from the business state data and the influence-factor data, the mean spectral radius (MSR), a linear eigenvalue statistic, is used as the index for correlation analysis between the matrix data. The data of all nodes obtained from the simulation are used to construct the state matrix, and the values of each node are used as the influence-factor matrix. The real-time sliding split-window size is 39 × 78. The single ring theorem and the change in the M-P law of the system state before and after a three-phase fault, obtained from the analysis, are shown in Figure 2.
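For one data window, the MSR can be computed along the following lines. This is a simplified sketch under stated assumptions: a single standardized window (no matrix products, i.e., L = 1 in the usual single-ring construction), a Haar unitary generated by QR with phase correction, and illustrative function and variable names.

```matlab
% Simplified mean spectral radius (MSR) for one N x T data window X,
% e.g., the 39 x 78 sliding window used above (single-ring recipe, L = 1).
function msr = meanSpectralRadius(X)
    N = size(X, 1);
    % standardize each row to zero mean and unit variance
    Xs = bsxfun(@rdivide, bsxfun(@minus, X, mean(X, 2)), std(X, 0, 2));
    % singular value equivalent: sqrt(Xs*Xs') times a Haar-distributed unitary
    [Q, R] = qr(randn(N) + 1i * randn(N));
    U = Q * diag(sign(diag(R)));            % Haar unitary (phase-corrected)
    Z = sqrtm(Xs * Xs') * U;
    % normalize rows so the ring-law support applies
    Zt = bsxfun(@rdivide, Z, sqrt(N) * std(Z, 0, 2));
    msr = mean(abs(eig(Zt)));               % mean modulus of the eigenvalues
end
```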

The eigenvector of a matrix is one of the important concepts in matrix theory. According to linear algebra, a linear transformation can usually be fully described by the eigenvalues and eigenvectors of its matrix. An eigenvector of a linear transformation is a nonzero vector whose direction is invariant under the transformation while its magnitude (modulus) is scaled; the scaling factor is the corresponding eigenvalue. The eigenvectors contain important information about the covariance matrix and thus reflect the operating-state information, but raw random matrix eigenvectors are limited by dimensionality and difficult to apply directly [17]. The principal eigenvalue is the eigenvalue with the largest modulus (or the largest absolute value if real), and the principal eigenvector is the eigenvector corresponding to it. Because the principal eigenvector reflects the main characteristics of the eigenvectors, it is chosen to characterize their main information. The ratios of eigenvalue indicators to eigenvalues at different ratios of matrix ranks are shown in Figure 3, and the change of the principal eigenvectors is shown in Figure 4. In comparison, the principal eigenvectors are less distinguishable and do not change significantly for arbitrary matrix sizes.

Because eigenvalues capture signal correlation well, spectrum sensing algorithms based on random matrix theory can use the distribution of the different eigenvalues of the sample covariance matrix in the decision process, without requiring a priori information about the primary user signal in practical applications. Studying array signal processing at low SNR is important and urgent; however, existing array spatial-spectrum estimation methods rest on the assumption that the number of snapshots is much larger than the number of array elements, which suits the high-SNR case and does not match the low-SNR scenario (where the number of array elements is comparable to the number of snapshots). By contrast, random matrix theory studies the laws of the eigenvalues and eigenvectors of the covariance matrix of the array's received data samples when the number of snapshots and the number of array elements grow at the same rate. In this asymptotic regime, random matrix theory yields limits for the eigenvalues of the sample covariance matrix and asymptotic results for the norm of the projection of its signal subspace onto the true signal.
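The eigenvalue-ratio decision described above (and used in the Abstract to flag abnormal operating states) can be sketched as follows; the function name, the matrix shape, and the way the threshold gamma is obtained (e.g., a Tracy-Widom approximation or Monte Carlo calibration) are assumptions for illustration.

```matlab
% Max/min eigenvalue-ratio detector on an M x K sample matrix Y
% (M monitored variables, K snapshots); gamma is a precomputed threshold.
function abnormal = eigRatioDetect(Y, gamma)
    K = size(Y, 2);
    R = (Y * Y') / K;                  % sample covariance matrix
    lambda = eig(R);
    idx = max(lambda) / min(lambda);   % eigenvalue detection index
    abnormal = (idx > gamma);          % above threshold -> abnormal state
end
```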

3.2. BP Neural Network Model Construction for Enterprise Business Performance Management

Factor analysis is a statistical technique that studies the extraction of common factors from a population of variables, allowing the identification of hidden representative factors among many variables. Grouping variables of the same essence into a single factor reduces the number of variables and allows testing hypotheses about the relationships between variables. Factor analysis streamlines data by aggregating the original data into several principal factors while retaining most of the information in the original data [18]. It is mainly used to measure indicators that are hidden in a data set, lie deeper, and cannot easily be measured directly. There are two types of factor analysis: exploratory and confirmatory. Exploratory factor analysis does not assume the relationship between factors and measures in advance but lets the data “speak for themselves”; principal component analysis and common factor analysis are typical methods. Confirmatory factor analysis assumes that the relationship between factors and measures is partially known, i.e., which measure corresponds to which factor, even though the exact coefficients are not yet known. The method adopted in this paper is the principal component method of exploratory factor analysis.

Assume the original data form an $n$-dimensional vector $x = (x_1, x_2, \dots, x_n)^T$, with factor loading coefficients $a_{ij}$ and residuals $\varepsilon_i$. The factor analysis model is then represented in matrix form as

$$x = AF + \varepsilon,$$

where $F = (F_1, F_2, \dots, F_m)^T$ is the vector of common factors and $A = (a_{ij})_{n \times m}$ is the loading matrix.

The model shows that if $m < n$, the purpose of simplifying the number of variables is achieved. The statistical meaning of the factor loading coefficient $a_{ij}$ is the correlation coefficient between the $i$th variable and the $j$th factor; in statistical terms it is called a weight, indicating the dependence of $x_i$ on $F_j$.
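A minimal sketch of the principal component method used here follows, assuming MATLAB's Statistics and Machine Learning Toolbox and an already standardized data matrix X (samples by variables); the 85% cumulative-variance cutoff is illustrative, and the retained factor scores are combined by their variance contribution rates as described in Section 5.

```matlab
% Principal-component factor extraction and composite score (illustrative).
[~, score, latent] = pca(X);                         % latent: component variances
m = find(cumsum(latent) / sum(latent) >= 0.85, 1);   % factors kept (85% cutoff)
w = latent(1:m) / sum(latent(1:m));                  % variance contribution weights
composite = score(:, 1:m) * w;                       % composite performance score
```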

Before factor analysis, to ensure sufficient correlation between variables, the original data must first pass the KMO test and Bartlett's sphericity test. The KMO statistic judges the correlation between variables by comparing the magnitudes of the simple correlation coefficients and the partial correlation coefficients among variables; when the correlation is strong, the partial correlation coefficients are much smaller than the simple correlation coefficients, and the KMO value is close to 1. In general, a KMO value above 0.9 indicates the data are very suitable for factor analysis; 0.8-0.9, suitable; 0.7-0.8, still suitable; 0.5-0.7, poorly suited; and below 0.5, not suitable.
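The KMO statistic itself is easy to compute from the correlation matrix and its inverse; below is a minimal sketch (assumes the Statistics and Machine Learning Toolbox for corr; X is samples by variables, and the function name is illustrative).

```matlab
% KMO statistic: squared simple correlations vs. squared partial correlations.
function kmo = kmoTest(X)
    R = corr(X);                            % simple correlation matrix
    S = inv(R);
    P = -S ./ sqrt(diag(S) * diag(S)');     % anti-image (partial) correlations
    off = ~eye(size(R));                    % off-diagonal mask
    kmo = sum(R(off).^2) / (sum(R(off).^2) + sum(P(off).^2));
end
```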

Key performance indicators (KPIs) form a performance appraisal model based on a handful of key indicator systems that best represent performance, distilled through analysis of job performance characteristics. A KPI must be a key indicator measuring the effectiveness of corporate strategy implementation; its purpose is to establish a mechanism that translates corporate strategy into a KPI assessment reflecting management's intent to quantify and highlight the main contradictions. Of course, the point of KPIs is not that fewer is better; the key is to capture the root of performance characteristics.

Corresponding to the hierarchical analysis method, the comprehensive enterprise performance evaluation index system constructed in this paper is divided into three levels: the first level distinguishes financial and nonfinancial indicators; the second level divides the financial indicators into five aspects (accounting revenue, solvency, asset quality, growth ability, and management and operation ability) and the nonfinancial indicators into two aspects (enterprise stability and innovation ability); these secondary indicators are then subdivided to form the complete system. The complete performance index system for listed Internet finance companies contains 7 third-level indicators [19]. Accordingly, this paper selects 5 financial indicators based on the five aspects of enterprise capability from the financial perspective. As the times develop, performance evaluation must focus not only on the economic benefits an enterprise generates but also on its innovation capability and robustness. Whether an enterprise has strong innovation and development ability determines its long-term development, and corporate robustness is key to protecting the rights and interests of customers and reducing investment risks. It is therefore also necessary to measure a company's innovation ability and the impact of its robustness on social stability, so this paper selects company stability B6 and innovation and development capability B7 as nonfinancial performance indicators.

It has been shown that any continuous function on a closed interval can be approximated by a BP neural network with a hidden layer, which can learn the functional relationships between data and is relatively easy to implement in MATLAB. If the number of hidden layers and the number of elements per layer are expanded without limit, the BP neural network can output results within any predefined accuracy range; however, too many nodes easily lead to overfitting and deviation from the actual situation, so the number of hidden layers and neuron nodes should be kept as small as possible within the predefined accuracy range, making the network more compact [20]. For the number of input and output nodes of the BP neural network model, the input layer has 7 nodes and the output layer has 1 node, namely the comprehensive score of the enterprise's performance level, as shown in Figure 5.

In the BP neural network, the relationship between the hidden layer and the output layer is as follows:

(1) Each node of the input layer performs a point-to-point calculation with each node of the hidden layer; the calculation is a weighted summation followed by activation.

(2) Each value computed by the hidden layer is then combined with the output layer in the same way.

(3) The hidden layer uses sigmoid as the activation function, and the output layer uses purelin. Purelin preserves numerical scale over any range, which is convenient for comparison with sample values, whereas the output of sigmoid can only lie between 0 and 1.

(4) The values of the input layer are first propagated to the hidden layer through the network calculation and then propagated to the output layer in the same way; the final output value is compared with the sample value to calculate the error. This process is called forward propagation, and a minimal sketch of it follows this list.
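The sketch below walks through steps (1)-(4) once in core MATLAB; the weights, hidden size, and sample value are random illustrative placeholders, not the trained network of this paper.

```matlab
% One forward pass: 7 inputs -> h hidden (sigmoid) -> 1 output (purelin).
sigmoid = @(z) 1 ./ (1 + exp(-z));        % hidden-layer activation
x  = rand(7, 1);                          % one input sample (7 indicators)
h  = 9;                                   % illustrative hidden-layer size
W1 = randn(h, 7);  b1 = randn(h, 1);      % input -> hidden weights and biases
W2 = randn(1, h);  b2 = randn(1, 1);      % hidden -> output weights and bias
yHidden = sigmoid(W1 * x + b1);           % step (1): weighted sum + activation
yOut    = W2 * yHidden + b2;              % steps (2)-(3): purelin output
err     = yOut - 0.5;                     % step (4): compare with a sample value
```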

First, the number of neurons in each layer must be clarified. The performance evaluation system for listed Internet finance companies established above determines 7 performance evaluation indexes from different perspectives, so the number of input-layer neurons is 7, recorded as i = 7. The final output of the neural network is the company's comprehensive performance score, so the output layer contains one neuron, recorded as j = 1. The number of hidden-layer neurons must then be calculated; the convergence of the neural network is closely related to the number of neurons in the hidden layer.

In the BP neural network constructed in this paper, the output-layer transfer function is purelin, and the hidden-layer transfer function is tansig, one of the most commonly used sigmoid functions. To parameterize the BP network, the number of training epochs is set to 1000, and to speed up model training a fast training function with strong learning ability is used, which improves the robustness of the model and reduces the occurrence of local minima in the results. All other parameters not specified in this paper use MATLAB's default initial values.
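The setup just described can be sketched with the Neural Network Toolbox as follows; this is a minimal sketch assuming R2016a, where P and T are placeholder names for the 7 x nSamples input matrix and the 1 x nSamples expected scores, and the hidden size of 7 comes from the comparison reported in Section 4.2.

```matlab
% BP network: 7 inputs, one tansig hidden layer, one purelin output node.
net = feedforwardnet(7);                  % 7 hidden nodes (Section 4.2)
net.layers{1}.transferFcn = 'tansig';     % hidden layer: tansig
net.layers{2}.transferFcn = 'purelin';    % output layer: purelin
net.trainParam.epochs = 1000;             % training iterations
[net, tr] = train(net, P, T);             % P: inputs, T: expected factor scores
Y = net(P);                               % simulated network output
```

Section 5 notes that Bayesian regularization was finally chosen for training; in MATLAB this is a one-line switch, net.trainFcn = 'trainbr', before calling train.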

Because the data were collected from the Internet, some values are missing (eight variables in total). For better follow-up, this paper uses linear interpolation to fill in the missing values. In addition, different data have different scales and units of measurement; to let different data play a role in the same model and to eliminate the impact of differences in the magnitudes of different indicators, the data must be standardized. This paper adopts the mean-standard deviation method (Z-score).
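Both preprocessing steps can be sketched in core MATLAB as follows, where X is a samples-by-indicators matrix with NaN marking missing entries; the use of 'extrap' for gaps at the series boundaries is an assumption for illustration.

```matlab
% Fill missing values by linear interpolation, column by column.
for j = 1:size(X, 2)
    v = X(:, j);
    miss = isnan(v);
    v(miss) = interp1(find(~miss), v(~miss), find(miss), 'linear', 'extrap');
    X(:, j) = v;
end
% Z-score standardization: zero mean and unit variance per indicator.
X = bsxfun(@rdivide, bsxfun(@minus, X, mean(X, 1)), std(X, 0, 1));
```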

4. Analysis of Results

4.1. Random Matrix Theory Model Performance Evaluation

In this section, the MUSIC algorithm, the weighted subspace fitting (WSF) algorithm, and the random matrix singular-value-based weighted subspace (RMT_E) algorithm are simulated. A uniform linear array of 15 sensors receives the signals, the number of signal sources is 3, and the number of simulation runs is 10000. The noise power is set to 1, and the target source signal is uncorrelated with the noise. The spatial spectral functions of MUSIC, WSF, and RMT_E are obtained with 10 snapshots at an SNR of −8 dB, as shown in Figure 6, where the search angle range is [−180°, 180°].
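For reference, the MUSIC spatial spectrum in such an experiment can be sketched as follows; this is a minimal sketch where half-wavelength element spacing and a ±90° search grid are assumptions, Y is the 15 x 10 snapshot matrix, and d = 3 sources as above.

```matlab
% MUSIC pseudospectrum for an M-element uniform linear array.
M = 15; d = 3; K = 10;
R = (Y * Y') / K;                          % sample covariance matrix
[V, D] = eig(R);
[~, idx] = sort(real(diag(D)), 'descend');
En = V(:, idx(d+1:end));                   % noise subspace
theta = -90:0.1:90;                        % search grid (degrees)
P = zeros(size(theta));
for k = 1:numel(theta)
    a = exp(-1i * pi * (0:M-1)' * sind(theta(k)));   % steering vector
    P(k) = 1 / real(a' * (En * En') * a);            % MUSIC spectrum
end
plot(theta, 10 * log10(P / max(P)));       % peaks mark estimated directions
```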

When the sources are correlated, the MUSIC algorithm produces peaks at angles other than the true source angles, and if these spurious peaks are too large, misjudgment is likely. The spatial spectrum of the WSF algorithm is similar to that of the RMT_E algorithm, but the overall deviation of its peaks in the 40° and 70° directions is slightly lower than that of the RMT_E algorithm, and the peak deviations of both in the 40° and 70° directions are significantly smaller than those of the MUSIC algorithm.

A single linear eigenvalue statistic of RMT (the mean spectral radius) cannot accurately represent the statistical information of all partitioned state matrices; i.e., the mean spectral radius does not apply to matrices of all dimensions. Different eigenvalue statistics therefore need to be customized for different engineering requirements so that RMT applies better to matrices of different dimensions and works more effectively in business performance management applications. Hierarchical contribution analysis is used with nonlinear models to quantify the explanatory power of each input indicator for the output indicator [21]. Contribution-rate analysis based on neural networks is often used to screen important influences and indicators in practical modeling and to select variables in forecasting studies. The important factors identified by contribution-rate analysis usually have strong explanatory power for the dependent (output) variable; moreover, the method can eliminate noisy variables with little influence on the dependent variable, so the new set of input variables it yields also helps build a more accurate and stable forecasting model. On this basis, this paper proposes the RMT-BP method.
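One common weight-based form of such contribution analysis is Garson's algorithm, which apportions each input's importance through the trained weights of a single-hidden-layer network; the sketch below is a stand-in illustration of this idea, not necessarily the exact variant used in this paper.

```matlab
% Garson-style contribution rates from trained weights:
% W1 is h x n (input -> hidden), W2 is 1 x h (hidden -> output).
function c = contributionRates(W1, W2)
    Q = bsxfun(@times, abs(W1), abs(W2'));      % h x n contribution products
    Q = bsxfun(@rdivide, Q, sum(Q, 2));         % each input's share per hidden node
    c = sum(Q, 1);                              % aggregate over hidden nodes
    c = c / sum(c);                             % normalize to contribution rates
end
```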

Due to the small data dimension, the eigenvalue statistic exhibits a drastic, randomly and unpredictably changing waveform after 6 s. As the dimensionality of the data shrinks, the characteristic statistic used in the figure gradually loses its statistical effect. As can be seen from Figure 7, when the state data matrix is large enough, the characteristic statistics reflect the overall state of the state matrix well, and the effect improves as the dimensionality grows. The results vary with the size of the state data, and the analysis effect differs across feature statistics. Among the four eigenvalue statistics used in the paper, the mean spectral radius and the standard deviation produce the smoothest curves and the best effect; the maximum eigenvalue yields a more jittery curve and the second-best result; the minimum eigenvalue does not change significantly, and its statistical effect is the worst.

Extracting system features with RMT and then obtaining optimal feature statistics with CNN and PCA substantially improves accuracy. Results obtained from a single feature statistic, such as the maximum eigenvalue, have relatively high false-positive and miss rates. This indicates that localization methods combining RMT with other deep learning algorithms give more accurate results than the single random matrix theory method. The correct rate of RMT-PCA improves more over the single feature quantity, but its misclassification rate is still higher: compared with the RMT-BP-based method, its accuracy is about 1.12% lower, its misclassification rate 0.64% higher, and its miss rate 0.48% higher, indicating that the BP neural network has the greater advantage in extracting data features and can obtain optimized feature quantities with better perception effect. In summary, the proposed RMT-BP business-operation anomaly method can be applied to matrices of different data dimensions and can better predict business operation anomalies caused by different incidents. Compared with the four models based on MSR, the maximum eigenvalue, the improved maximum eigenvalue, and RMT-PCA, the RMT-BP method has higher accuracy, a lower misclassification rate, and a lower omission rate.

4.2. Training and Simulation Results of BP Neural Network Model for Business Performance Management

This paper uses MATLAB to implement the business performance management neural network model based on random matrix theory described in the preceding sections. MATLAB is an interactive system integrating data analysis, matrix operations, and nonlinear dynamic systems; it is especially well suited to the modeling and simulation of nonlinear dynamic systems, is convenient to use, and offers an intuitive data visualization interface. MATLAB has many callable toolboxes covering fields such as data acquisition, control system design, LMI control, robust control, neural networks, and differential equation solving, which greatly facilitates use. The MATLAB version selected in this paper is R2016a, with the Neural Network Toolbox, the Simulink dynamic simulation toolbox, the Partial Differential Equation Toolbox, and the Image Processing Toolbox installed. With this version, toolbox functions can be called directly when writing programs, avoiding the tedious work of programming and debugging the matrix operations in neural networks by hand and making the program more reliable. The MATLAB program interface can also interact with other programming languages such as C, which broadens its applicability. Linear interpolation is used to fill in missing values; in addition, because different data have different dimensions and units of measurement, the data are standardized so that they can play a role in the same model and the influence of differences in indicator magnitudes is eliminated. Through comprehensive experimental comparison, this paper finds that the optimal network training speed, effect, and generalization ability are achieved when the number of hidden-layer nodes of the BP neural network is 7, the learning rate is 0.1, the maximum number of training epochs is 1000, and the target error is 0.0115. (After learning and training, the goodness of fit of the final three-layer neural network structure was 0.98762, with a mean square error of 0.00362. After optimization of the weights and thresholds by the genetic algorithm, the network reached the set target accuracy at training step 7, at which point training was complete.) The error on the test sample is 0.00368, which meets the accuracy target initially set for the model, is close to the training-sample error, and shows no overfitting.

To show more intuitively how well the established BP neural network model fits the original samples, 100 samples were randomly selected and fed into the three-layer network above to obtain fitted output values, which were compared with the actual values of the selected samples. The results are shown in Figure 8, where the real sample values are represented by the solid line and the fitted network outputs by hollow circles. The fitting results are ideal, and the error between the output values and the real values is small, which provides a reliable basis for the contribution-rate calculation below.

Based on the performance evaluation model constructed above, this paper conducts a comprehensive comparative analysis of the evaluation results from the financial and nonfinancial performance perspectives for each of the six test-sample companies. The simulation outputs of the 36 samples of these six companies from 2016 to 2021, after BP neural network processing, are shown in Figure 9.

The prediction model built using the principal variables retained by contribution-rate analysis as network inputs shows higher fitting and prediction accuracy, is less prone to overfitting, and is more satisfactory after the target accuracy is further tightened relative to the previous chapter. The network model of the previous chapter also fits and predicts well, but it contains some redundant and even interfering indicators because it pursued comprehensiveness of the index set as far as possible. The model in this chapter further improves accuracy while simplifying the input indicators and reducing the user's workload, which confirms the reasonableness of screening indicators by contribution-rate analysis. It should also be noted that the fifth group of cases (training-sample fitting meets the requirements but the test error misses the target) occurs because, in general, if the selected training samples are mostly of the same type, the knowledge such samples can provide has inherent deficiencies, so generalization may be weaker and the prediction error larger. When the training sample contains relatively rich information or rules, the model can fully learn them as knowledge during network training, so generalization is better and the prediction error smaller. For these reasons, 10 training and testing sessions were conducted to examine the training, learning, and prediction performance of the model and make the conclusions more convincing. The test results show that the model is highly stable in training, learning, and prediction performance.

5. Empirical Conclusion

This paper focuses on empirical analysis, using factor analysis to obtain the business performance scores and rankings of the firms according to the method documented above. Next, the BP neural network model was trained using the normalized raw data as the input layer and the factor scores as the output layer. The model was then optimized twice. The first optimization concerned the training method, and Bayesian regularization was finally chosen, whose advantage is that the mean square error and the average error stay low after convergence over many generations; the second was learning-rate optimization, which effectively addresses overfitting and brings the mean square error and average error of the test set closer to those of the training set. After the two optimizations, the model predicts performance scores from enterprise data very effectively and generalizes very well. In the future, whether for a new entrant or a newly listed enterprise, the model can be used to calculate performance scores and compare them with those of other enterprises in the same year.

6. Conclusion

In this paper, seven data indicators (five types of financial data and two types of nonfinancial data) are selected, and seven principal factors are extracted with the aid of random matrix theory after data screening, preprocessing, and standardization. The factor scores of the seven principal factors, combined according to their variance contribution rates, are used as the target values for BP neural network training, and an enterprise business performance evaluation model based on factor analysis and BP neural network analysis is established. The hierarchical analysis method uses scientific index weights derived from expert scoring and is strongly logical, while the BP neural network has strong learning ability and forms a stable network structure by learning the nonlinear functional relationships among data autonomously. The test results show that the RMT-BP model evaluates enterprise business performance more accurately, largely avoids falling into the “dead node” of a local optimum, and improves the accuracy and practicality of the model; it has higher clustering accuracy and faster convergence than traditional performance management approaches, better applicability to the evaluation of corporate performance, and strong generalization ability, so it can be used for corporate performance evaluation in general. The uneven disclosure level of CSR indicators may reduce the simulation accuracy of the BP neural network and prevent its superiority from being fully utilized. In future studies, the latest data can be selected as samples to continue using the model.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This study was supported by the “Block Chain Thinking and Enterprise Strategic Management Innovation Development Research” (project number: 2020SK004).