Abstract
Physical education has attracted increasing attention from teachers and parents, since a healthy body is the foundation of learning. School sports competitions have likewise drawn growing interest from researchers, and scholars have studied the prediction of competition results in depth, because accurate predictions allow teachers to arrange appropriate training for students and achieve the best learning effect. How to construct a prediction model, and whether the performance and generality of the constructed model are suitable for predicting sports competitions, have therefore become major research questions. The deep neural network is a complex network method modeled on the structure of the human brain; it plays a core role in sports planning and performance prediction and can estimate in advance how athletes or students will perform in competitions. This paper establishes an autoregressive summation prediction model, a complex neural network prediction model, and an improved complex neural network prediction model. The results show that only the improved BP neural network model predicts performance remarkably well: the values obtained by this model reduce the systematic error of prediction, so it can better predict the results of sports competitions in China and indicate which prediction model suits which sports event.
1. Introduction
We live in an environment in which education is advancing by leaps and bounds, and the content and structure of our physical education curriculum are becoming more complete and developing rapidly. Based on humidity, dew point, wind speed, and other meteorological parameters, a deep neural network (DNN) model has been proposed to predict minimum and maximum temperatures; a particle swarm optimization algorithm is applied to select the relevant and important features of the data set to improve the prediction accuracy of the model, and the objective function is optimized with a multiobjective sine cosine algorithm (MOSCA) [1]. The indexes of the optimized objective function are data rate, signal-to-interference-plus-noise ratio (SINR), power consumption, and energy efficiency, and the optimized objective function is then passed to a neural network for resource allocation [2]. A new deep neural network convolution layer, the variable convolution (vConv) layer, learns the kernel length adaptively from the data to realize motif recognition in high-throughput data sets [3]. The effectiveness of pretraining neural networks on different data sets has been demonstrated, showing that in many practical cases the convolution layer can be replaced by a smaller fully connected layer with relatively small accuracy degradation [4]. A deep learning (DL) method has been proposed to accurately obtain the performance of components and capture the key features of typical components, with the data set prepared and trained on the extracted indicators [5]. A deep neural network method has also been introduced for fault diagnosis: because of its powerful feature extraction ability, it can extract higher-level and more abstract fault features from massive data, and experiments show that deep neural networks achieve better feature learning and classification performance in the field of fault diagnosis [6]. The background of semantic segmentation has been reviewed, the semantic segmentation methods based on deep learning have been divided into five categories, and the advantages and disadvantages of each category have been presented [7]. A surrogate model built with polynomial chaos expansion and a deep neural network has been used to compute the Sobol indices needed to identify the influence of soil parameters on dam behavior [8]. A new pruning-based DNN model has been proposed that reduces the computational overhead of DNNs by mining more compact structures and learning effective weights without weakening the expressive ability of the networks [9]. Empirical parameters have been replaced with dynamic values produced by machine learning, which markedly improves the accuracy of the extended model; complex neural networks generate the dynamic values to reproduce orbital energies and densities based on density functional theory [10]. The prediction ability of deep neural networks relative to other machine learning techniques has been established, and the future scope of deep learning in multiparameter time series prediction has been shown [11]. Other results help improve the estimation of structural variables of Arctic forests using the proposed image sampling and input features [12]. This approach also benefits from transfer learning; that is, a DNN trained on one battery data set can use less training data to improve curve estimation for other batteries operating in different scenarios [13].
A brief scientific overview of machine learning is given in [14]. Learning-based computer-generated holography (CGH) provides a fast way to generate holograms for holographic displays [15]. In REPAID, the multifocus image is first reconstructed from a single all-optical image and then up-sampled by a specially designed deep neural network suited to real scenes, and finally a full-focus image with high spatial and temporal resolution is generated [16]. A regularized chain of deep complex brain structure networks has been introduced to handle classification tasks labeled by multiple annotators [17]. A complex brain structure network for linear B cell epitope prediction has also been introduced [18]. The most relevant deep learning-based methods and the state of the art in graphical page object detection in document images have been discussed [19]. Materials informatics is an emerging field that allows us to predict the properties of materials and has been applied to various research and development fields such as materials science [20]. A new two-layer deep neural network structure has been proposed that reconstructs the self-organized humanlike deformation shape from a depth frame by combining the intrinsic parameters of the camera [21]. Privacy-preserving deep neural networks have become essential because of the need to maintain personal privacy and the confidentiality of sensitive data, and they have attracted the attention of many researchers; with the wide application of neural networks as a service in unsecured cloud environments, the importance of privacy-preserving networks is increasing day by day [22]. CNN is an image recognition technology that, compared with standard manual feature extraction, requires no explicit feature engineering and produces efficient results [23]. A new fault diagnosis method has been proposed that can not only diagnose defects of known types but also detect defects of unknown types; its performance is satisfactory in both novelty detection and fault diagnosis and is superior to other advanced methods [24]. We propose a new algorithm called deep feature selection, which estimates sparse parameters and other parameters at the same time [25].
2. Concept and Characteristics of Complex Network Method of Brain Structure
2.1. The Concept of Artificial Neural Networks
The human brain is the most complex, perfect, and effective information processing system known to humankind. It is an advanced product of biological evolution and the cornerstone of higher mental activities such as language, thinking, and emotion. At present, human beings still know little about this field. For a long time, scholars have studied neural networks through a series of disciplines such as neurology, psychology, cognitive science, mathematics, electronics, and computer science, seeking to dissect the structure of the human brain and its modes of massive information processing. Using the complex network structure of the brain, an intelligent system similar to the human brain is designed to perform certain functions, process massive amounts of information, and solve the complex problem of fusing different kinds of data. Replacing part of the labor of the human brain with a machine built from electronic components is a core goal of scientific and technological development. The computer is an information processing system that uses electronic components to carry out some of the memory, calculation, and information processing functions of the human brain. The switching speed of electronic components in modern computers is on the order of nanoseconds, while the response time of a nerve cell in the human brain is on the order of milliseconds. The complex neural network method of the brain is therefore only a neural network that can perform certain idealized functions after artificial construction and processing, built on our current understanding of the brain's neural network. It is a mathematical network method that approximates an idealized human neural structure, and it is also a data processing structure that imitates the structure and function of the brain's complex nervous system. In essence, it is a complex network constructed from a large number of simple, interconnected components; it is highly nonlinear, can carry out complex logical operations, and can realize nonlinear relationships.
2.2. Features of Artificial Neural Networks
Although an artificial neural network is an idealized network structure modeled on the brain, the complex network structure of the brain differs from current computer and artificial intelligence architectures. It nevertheless shares many similarities with human intelligence: a single neural unit is relatively weak, but a large number of neurons connected into a network gain interoperability and parallel processing ability and become very powerful. It has the following characteristics:
(1) Inherent Parallel Structure and Parallel Processing. An artificial neural network resembles the human brain in that its structure is parallel and interconnected and its processing is also parallel and simultaneous. Neurons in each layer process data at the same time; that is, the data processing of the network can be distributed and carried out simultaneously on multiple processing units.
(2) Distributed Storage of Knowledge. In the complex neural network model of brain structure, knowledge is not stored in a fixed storage unit; all memorized information is stored in the interconnection weights between neurons. The content of the stored information cannot be read from the weights of a single neuron, because the knowledge is stored in a distributed way.
(3) Strong Fault Tolerance. The death of some brain cells every day does not affect our memory and thinking ability. Similarly, artificial neural networks have strong error tolerance; local or partial neuron damage does not affect subsequent, let alone global, activity.
(4) Self-Adaptability. An artificial neural network can acquire various abilities through learning. When input patterns and the corresponding ideal output patterns are fed into the network, the network adjusts the connection weights between the neurons in each layer according to the basic information extracted from the training samples by the learning algorithm and stores this information in the form of connections between neurons until the network reaches a stable state.
3. Sports Competition Prediction Algorithm
3.1. Traditional Statistical Methods
3.1.1. Determining the Time Series Prediction Model
(1) No-Change Method. The premise of this method is that the sports competition result in period T + 1 is the same as that in period T, that is, $\hat{y}_{T+1} = y_T$. Because this model is too idealistic, it may produce a prediction that is too high for Chinese sports competition results, which show an obvious improving trend.
(2) Proportional Change Method. The results of sports competitions are assumed to change by a fixed percentage over time. The expression is as follows: $\hat{y}_{T+1} = y_T (1 + r)$, where $r$ is the average rate of change per period.
(3) Moving Average Model. The average of the observed values of the past $n$ periods is used as the predicted value for the prediction period. The expression is as follows: $\hat{y}_{T+1} = \frac{1}{n}\sum_{i=0}^{n-1} y_{T-i}$.
(4) Weighted Moving Average Model. Different weights are assigned according to how far each observation lies from the prediction period. It is generally believed that the closer an observation is to the prediction period, the greater the weight it should receive, because recent values carry stronger predictive information. The expression is as follows: $\hat{y}_{T+1} = \sum_{i=0}^{n-1} w_i\, y_{T-i}$, where the weights satisfy $\sum_{i=0}^{n-1} w_i = 1$.
(5) Exponential Smoothing Model.
It is a special weighted average method that uses the weighted average of the previous observation and the previous predicted value as the predicted value for the next period. The calculation formula is $\hat{y}_{T+1} = \alpha y_T + (1 - \alpha)\hat{y}_T$, where $0 < \alpha < 1$ is the smoothing coefficient.
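As a concrete illustration, the following Python sketch implements the five simple predictors listed above for a short series of historical results. The sample data, window size, weights, and smoothing coefficient are illustrative assumptions, not values taken from this paper.

```python
# Minimal sketch of the simple time-series predictors described above.
# The historical results and all parameters are illustrative assumptions.

def no_change(history):
    """No-change method: next value equals the last observation."""
    return history[-1]

def proportional_change(history, r=0.02):
    """Proportional change: last observation adjusted by an assumed rate r."""
    return history[-1] * (1.0 + r)

def moving_average(history, n=3):
    """Moving average of the last n observations."""
    return sum(history[-n:]) / n

def weighted_moving_average(history, weights=(0.2, 0.3, 0.5)):
    """Weighted moving average; more recent observations get larger weights."""
    recent = history[-len(weights):]
    return sum(w * y for w, y in zip(weights, recent))

def exponential_smoothing(history, alpha=0.4):
    """Single exponential smoothing applied over the whole history."""
    forecast = history[0]
    for y in history[1:]:
        forecast = alpha * y + (1.0 - alpha) * forecast
    return forecast

if __name__ == "__main__":
    results = [11.8, 11.6, 11.5, 11.3, 11.2]  # hypothetical 100 m times
    print("no change:            ", no_change(results))
    print("proportional change:  ", proportional_change(results, r=-0.01))
    print("moving average:       ", moving_average(results, n=3))
    print("weighted moving avg:  ", weighted_moving_average(results))
    print("exponential smoothing:", exponential_smoothing(results, alpha=0.4))
```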
3.1.2. Stochastic Time Series Model
This is the most commonly used approach among time series methods: regression is used to describe the trend of sports competition results with respect to some factor. The independent variable in the model can be any factor that influences competition results, and time is usually chosen as the independent variable.
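A minimal sketch of this idea, assuming a simple linear trend fitted by least squares over hypothetical yearly results (both the data and the numpy-based fitting routine are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the trend-regression idea: fit result = a*t + b with time as the
# independent variable and extrapolate one period ahead. Data are made up.
years = np.array([2017, 2018, 2019, 2020, 2021], dtype=float)
results = np.array([11.8, 11.6, 11.5, 11.3, 11.2])  # hypothetical 100 m times

a, b = np.polyfit(years, results, deg=1)       # least-squares linear trend
prediction_2022 = a * 2022 + b
print(f"trend: {a:.4f} per year, predicted 2022 result: {prediction_2022:.2f}")
```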
3.2. Evaluation Criteria of Sports Competition
The model is evaluated with the mean absolute error, the correlation coefficient, and the reliability of the output data. Because these indicators are affected neither by the sample size nor by the units of the samples used in different models, they provide strong comparability. They can be obtained by the following formulas.
Mean absolute error:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|.$$
Correlation coefficient:
$$r = \frac{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)\left(\hat{y}_i - \bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(\hat{y}_i - \bar{\hat{y}}\right)^2}}.$$
Data reliability:
$$R = \frac{1}{n}\sum_{i=1}^{n} j_i,$$
where $j_i = 1$ if the prediction error of sample $i$ lies within the allowed error bound and $j_i = 0$ otherwise. Here $y_i$ and $\hat{y}_i$ denote the actual and predicted values and $n$ is the number of samples.
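The sketch below computes these three indicators for a set of predictions; the error bound used for the reliability indicator is an assumed value, since the paper does not state it.

```python
import numpy as np

def evaluate(actual, predicted, tolerance=0.5):
    """Compute the three evaluation indicators described above.

    `tolerance` is an assumed error bound for the reliability indicator.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    mae = np.mean(np.abs(actual - predicted))         # mean absolute error
    r = np.corrcoef(actual, predicted)[0, 1]          # correlation coefficient
    hits = np.abs(actual - predicted) <= tolerance    # j_i indicator variable
    reliability = hits.mean()                         # fraction of reliable predictions
    return mae, r, reliability

if __name__ == "__main__":
    actual = [75, 80, 68, 90, 85]        # made-up test scores
    predicted = [74, 82, 70, 88, 86]
    mae, r, rel = evaluate(actual, predicted, tolerance=3.0)
    print(f"MAE={mae:.2f}, r={r:.3f}, reliability={rel:.2f}")
```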
3.3. Artificial Neural Network Algorithm
The expression for the input sum of a neuron is as follows:
$$net_j = \sum_{i=1}^{n} w_{ij} x_i + \theta_j,$$
where $x_i$ are the inputs, $w_{ij}$ are the connection weights, and $\theta_j$ is the threshold (bias) of neuron $j$.
Neuron output:
$$y_j = f\!\left(net_j\right),$$
where $f$ is the activation function, typically the sigmoid function $f(x) = 1/\left(1 + e^{-x}\right)$.
Artificial neural network learning is carried out under the condition that the input patterns and the ideal output patterns are known. The comprehensive error usually adopts the sum of squares of the errors:
$$E = \frac{1}{2}\sum_{k}\left(d_k - y_k\right)^2,$$
where $d_k$ is the ideal output and $y_k$ the actual output.
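For concreteness, a small sketch of a single neuron's forward pass and its squared-error term, with illustrative weights, inputs, and desired output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass of a single neuron: weighted input sum plus threshold (bias),
# passed through the sigmoid activation. All values are illustrative.
x = np.array([0.6, 0.1, 0.8])      # input pattern
w = np.array([0.4, -0.2, 0.7])     # connection weights
theta = 0.1                        # threshold / bias term

net = np.dot(w, x) + theta         # input sum of the neuron
y = sigmoid(net)                   # neuron output

d = 0.9                            # desired (ideal) output
error = 0.5 * (d - y) ** 2         # squared-error contribution of this output
print(f"net={net:.3f}, output={y:.3f}, error={error:.4f}")
```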
3.3.1. BP Neural Network
(1) Calculate the output values of the hidden layer and output layer nodes.
Hidden node output:
$$h_j = f\!\left(\sum_{i} v_{ij} x_i + \gamma_j\right).$$
Output node:
$$o_k = f\!\left(\sum_{j} w_{jk} h_j + \theta_k\right).$$
(2) Calculate the errors of the output layer and the hidden layer.
All-sample error:
$$E = \sum_{p=1}^{P} E_p.$$
One-sample error:
$$E_p = \frac{1}{2}\sum_{k}\left(d_k - o_k\right)^2.$$
Output layer node error:
$$\delta_k = \left(d_k - o_k\right) o_k \left(1 - o_k\right).$$
Hidden layer node error:
$$\delta_j = h_j \left(1 - h_j\right)\sum_{k}\delta_k w_{jk}.$$
(3) Correct the node weights and thresholds of the output layer and the hidden layer.
Output layer node weight correction:
$$\Delta w_{jk} = \eta\, \delta_k\, h_j.$$
Output layer node threshold correction:
$$\Delta \theta_k = \eta\, \delta_k.$$
Hidden layer node weight correction:
$$\Delta v_{ij} = \eta\, \delta_j\, x_i.$$
Hidden layer node threshold correction:
$$\Delta \gamma_j = \eta\, \delta_j.$$
Here $x_i$ are the inputs, $v_{ij}$ and $w_{jk}$ are the input-hidden and hidden-output connection weights, $\gamma_j$ and $\theta_k$ are the corresponding thresholds, $d_k$ is the ideal output, $f$ is the sigmoid activation function, and $\eta$ is the learning rate.
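A minimal numpy sketch of one BP training step following these formulas; the network size, sample values, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_train_step(x, d, V, gamma, W, theta, eta=0.5):
    """One training step of a three-layer BP network with sigmoid units.

    x: input vector, d: desired output vector,
    V/gamma: input->hidden weights and thresholds,
    W/theta: hidden->output weights and thresholds, eta: learning rate.
    """
    # (1) forward pass
    h = sigmoid(V @ x + gamma)          # hidden node outputs
    o = sigmoid(W @ h + theta)          # output node values

    # (2) error terms
    delta_o = (d - o) * o * (1.0 - o)             # output layer node error
    delta_h = h * (1.0 - h) * (W.T @ delta_o)     # hidden layer node error

    # (3) weight and threshold corrections
    W += eta * np.outer(delta_o, h)
    theta += eta * delta_o
    V += eta * np.outer(delta_h, x)
    gamma += eta * delta_h

    sample_error = 0.5 * np.sum((d - o) ** 2)
    return V, gamma, W, theta, sample_error

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.normal(scale=0.5, size=(4, 3))
    gamma = np.zeros(4)
    W = rng.normal(scale=0.5, size=(1, 4))
    theta = np.zeros(1)
    x, d = np.array([0.2, 0.7, 0.1]), np.array([0.8])
    for _ in range(200):
        V, gamma, W, theta, e = bp_train_step(x, d, V, gamma, W, theta)
    print(f"final sample error: {e:.6f}")
```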
3.3.2. BP Algorithm Improvement
(1) Additional Momentum Method. This method is based on back propagation: each change of the weights and thresholds is augmented with a term proportional to the previous change of the weights and thresholds, and the new changes are then generated according to the back propagation rule. With an additional momentum factor $mc$ ($0 < mc < 1$), the adjustment formulas for the weights and thresholds are
$$\Delta w(t+1) = \eta\, \delta\, x + mc\, \Delta w(t), \qquad \Delta \theta(t+1) = \eta\, \delta + mc\, \Delta \theta(t).$$
(2) Adaptive Learning Rate Method. Adjusting the learning rate adaptively helps shorten the training time. If the learning rate is too small, convergence is too slow; if it is too large, the weights may change too much and the training may diverge. Therefore, an improved algorithm that adjusts the learning rate adaptively is used; its weight and threshold update expression is
$$w(t+1) = w(t) + lr(t)\, \delta\, x, \qquad \theta(t+1) = \theta(t) + lr(t)\, \delta.$$
The adjustment formula for the adaptive learning rate $lr$ is as follows:
$$lr(t+1) = \begin{cases} k_{\mathrm{inc}}\, lr(t), & E(t+1) < E(t), \\ k_{\mathrm{dec}}\, lr(t), & E(t+1) > E(t), \\ lr(t), & \text{otherwise}, \end{cases}$$
where $k_{\mathrm{inc}} > 1$ and $0 < k_{\mathrm{dec}} < 1$ are the increase and decrease factors and $E(t)$ is the training error at step $t$.
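The sketch below shows how the momentum update and the adaptive learning-rate rule could be combined in code; the gradient is a toy stand-in, and the momentum factor and the adjustment factors 1.05 and 0.7 are assumed values, not taken from the paper.

```python
import numpy as np

def momentum_update(delta_prev, grad, lr=0.1, mc=0.9):
    """Weight/threshold change with an additional momentum term."""
    return mc * delta_prev + lr * grad

def adapt_learning_rate(lr, error_new, error_old, k_inc=1.05, k_dec=0.7):
    """Increase lr when the error falls, decrease it when the error rises."""
    if error_new < error_old:
        return lr * k_inc
    if error_new > error_old:
        return lr * k_dec
    return lr

if __name__ == "__main__":
    w = np.array([0.3, -0.5])
    delta = np.zeros_like(w)
    lr, last_error = 0.1, np.inf
    for step in range(5):
        grad = -2.0 * w                 # placeholder gradient of a toy error w**2
        delta = momentum_update(delta, grad, lr)
        w = w + delta
        error = float(np.sum(w ** 2))
        lr = adapt_learning_rate(lr, error, last_error)
        last_error = error
        print(f"step {step}: error={error:.4f}, lr={lr:.3f}")
```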
(3) Levenberg-Marquardt Optimization Method. The weight and threshold update formula is
$$\Delta w = \left(J^{T} J + \mu I\right)^{-1} J^{T} e,$$
where $J$ is the Jacobian matrix of the network errors with respect to the weights and thresholds, $e$ is the error vector, $\mu$ is an adaptively adjusted damping factor, and $I$ is the identity matrix.
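A small numeric sketch of one Levenberg-Marquardt step under this formula; the Jacobian, error vector, and damping factor are made-up values used only to show the linear algebra.

```python
import numpy as np

# One Levenberg-Marquardt step. In practice J and e come from the network's
# errors and weights; here they are small illustrative arrays.
J = np.array([[0.5, -0.2],
              [0.1,  0.8],
              [0.3,  0.4]])          # Jacobian (3 errors, 2 weights)
e = np.array([0.2, -0.1, 0.05])      # error vector
mu = 0.01                            # damping factor

delta_w = np.linalg.solve(J.T @ J + mu * np.eye(J.shape[1]), J.T @ e)
print("weight update:", delta_w)
```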
4. Experiment
4.1. Simulation Experiment
In order to effectively predict sports performance, 500 samples of sports performance data were collected at a university, of which 400 were used as training samples and 100 as test samples. The 400 training samples were used to train a support vector machine algorithm, and the sports scores of the 100 test samples were then predicted on an online computing platform. The prediction results and prediction error curves obtained by the online computing platform are shown in Table 1 and Figure 1.
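The workflow described above could be reproduced roughly as in the following sketch, which uses scikit-learn's SVR on synthetic data as a stand-in, since the original university data set and the online computing platform are not available; the features, score model, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Stand-in for the experiment described above: 500 samples are split 400/100,
# an SVM regressor is trained, and the test scores are predicted. The feature
# matrix and scores are synthetic because the university data set is not public.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(500, 5))                         # 5 hypothetical fitness features
y = 60 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 2, 500)     # synthetic sports scores

X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(X_train, y_train)
predictions = model.predict(X_test)

print("test MAE:", mean_absolute_error(y_test, predictions))
```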

The experimental results in Table 1 and Figure 1 show that the predictions of sports performance obtained with this model are very satisfactory; that is, the model can effectively predict sports performance.
The predicted and actual results of the above-mentioned sports events are plotted as a bar chart in Figure 1.
Five groups of sports data are selected from the training sample set to verify whether the improved BP neural network has been trained well. The resulting error curve is shown in Figure 2.

4.2. Model Comparison
The errors of the three models in predicting sports results are compared; the comparative data are shown in Table 2.
According to the experimental data in Table 2, the improved neural network model with a complex brain structure has higher accuracy in predicting sports events. Is the prediction of this model equally accurate for different types of events? We further compare and analyze four kinds of sports: running, ball games, long jump, and high jump. The experimental data are as follows.
The results of these four kinds of events at a high school sports meeting are selected to test the universality of the model.
The model performance indicators for running sports competitions are shown in Table 3.
The model index comparison data from the above experimental results are plotted as a bar chart in Figure 3.

The model performance indicators for ball games are compared in Table 4.
The model index comparison data from the above experimental results are plotted as a bar chart in Figure 4.

The model performance indicators for long jump sports are shown in Table 5.
The model index comparison data from the above experimental results are plotted as a bar chart in Figure 5.

Model performance indicators for high jump sports are shown in Table 6.
The model index comparison data from the above experimental results are plotted as a bar chart in Figure 6.

4.3. Contrast Experiment
The previous results show that the single-event predictions of the various sports can be combined into a prediction superior to each individual one, which further improves prediction accuracy. Therefore, this study continues to use the improved complex brain structure neural network modeling method to build a prediction model on all the data from 2018 to 2021 and to predict the comprehensive evaluation index values for these years. The prediction results are shown in Table 7.
5. Conclusion
Through the preliminary establishment of an autoregressive summation prediction model of the development of sports competition in China, a complex brain structure neural network prediction model, and an improved network prediction model, it is concluded that the improved complex brain structure prediction model is remarkably effective. The research results are as follows:
(1) Compared with the traditional regression model, the complex brain structure neural network is better suited to predicting the results of sports competitions.
(2) In the model comparison experiment, the individual events of the various sports are predicted separately, and the prediction effect of the improved model is not the same across events.
(3) Among the four kinds of sports examined in this article, the prediction effect is most obvious for running, while the prediction effect for the other three kinds is poorer.
(4) The prediction error of the improved neural network model is roughly half that of the original model.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding this work.
Acknowledgments
This work was supported by the Social Science Foundation of Hunan Province: Research on the Development of Urban and Rural Coordinated Public Sports Service under the Background of Rural Revitalization (Grant No. 20TBA042).