Abstract

This paper applies the analytical methods of intelligent programming to an in-depth study of university education management and designs a corresponding university education management model for use in actual teaching. Building on the basic theories and methods of big data distributed data mining and machine learning, automated machine learning, and big data programming computation, and considering both the importance and technical challenges of the algorithms themselves and the practical application needs of industry, the paper first selects a series of commonly used data mining and machine learning algorithms characterized by high complexity, pronounced computational efficiency problems, and difficulty of distributed algorithm design, and then investigates efficient large-scale distributed, parallelized data mining and machine learning methods and algorithms. As an innovative teaching model, the experiential teaching model focuses on cultivating individual students' independent learning ability and subjective initiative; it not only effectively activates the classroom atmosphere and improves the teaching effect but also meets the requirements that the development of the times places on classroom teaching. The experiential teaching model proposed in this paper is validated empirically, and the implementation process of the experiential teaching model for high school physics is further improved. In terms of overall implementation effect, the proposed experiential teaching model deepens students' understanding of concepts and physical laws, enabling them to solve physics problems better, and has a positive effect on enhancing students' learning interest and attitudes.
Regarding system security, the average comprehensive index across units is 61.11; 22 units are above the average index, accounting for 61.11%, and 14 are below it, accounting for 38.89%; the management systems of 23 management departments passed safety monitoring, accounting for 63.89%. At the same time, the proposed model provides educational practitioners with a more reliable framework for instructional design, which has realistic reference value and significance.

1. Introduction

The Internet is a form of information technology encompassing applications such as big data, cloud computing, and artificial intelligence. All industries in society are being influenced by the development of the Internet and are using its technology to promote their own development [1]. From a certain perspective, "Internet+" can be understood as the application of Internet technology to all traditional industries. The combination of the two is organic: traditional industries become closely integrated with the Internet through information technology and complete their innovation by building Internet platforms and other applications. At the present stage, society is in an environment of globalization and informatization, and the Internet profoundly affects the daily study and life of college and university students through various communication media. Facing the requirements of the new era, implementing the fundamental task of moral education, accurately grasping the characteristics and growth dynamics of students from the actual needs of local universities and their students, and innovating the working mechanism on the basis of the Internet platform together constitute a systematic project requiring comprehensive reform, from the concept of the working mechanism to the way it is realized [2]. It is of practical significance to study in depth the connotation of the new Internet-based model of educating and managing students in local colleges and universities, and to put the new system of nurturing students into practice, so as to achieve the goal of cultivating well-rounded builders and successors.

With the rapid development and widespread application of computer and information technology, industry application data is exploding, and society has stepped into the era of big data and the digital economy. As a strategic asset of industrial enterprises and the country, big data has been officially listed as an important production factor in the data economy era and plays a key role in its development [3]. This study is not only a theoretical discussion: based on the theoretical analysis, it also provides a reasonable implementation mechanism and strategy choice for the reform and innovation of the school art curriculum, offering a policy basis for its construction and reform in practice and for redesigning the art curriculum, thereby fundamentally securing the selection of art education paths, goals, and models under new media in schools and the operation and implementation of specific art curricula [4]. The average comprehensive index of the management units is 75.56; 25 units are above the average index, accounting for 69.44%, and 11 are below it, accounting for 30.56%. There are 27 management departments with professional management systems, accounting for 75%, while 9 management departments have not yet built a professional management system. The overall goal of educational management is to promote, support, and sustain an environment for effective teaching and learning within educational institutions, but how these key goals are defined, and the means of achieving them, may vary widely across educational systems, levels, and cultures. While striving to achieve these goals, arts education administrators also need to enlist and organize the resources available to a community, through thoughtful and practical application of management principles, to achieve the educational goals set by the community's government [5].

Thus, arts education administrators at all levels of the education system must respond to the diverse educational goals set by society. Arts education management must also respond to global and regional changes; at the same time, because developments in information technology may directly affect teaching and learning by changing the way the curriculum is taught and assessed, they can make management more meaningful. How arts education management develops as a discipline and effectively meets the needs of the education system is a question we need to think about, while education is influenced by the challenges of the technological, social, cultural, and economic changes sweeping the globe, which will determine how effective management practices can be. Computational thinking is not the thinking of a computer but a human way of solving problems; learners learn not merely to master technology or a computer language but to use them to solve problems. Students internalize computational thinking by transferring it to real life and solving problems related to everyday living.

Data parallelism is one of the most widely used algorithm design schemes for big data parallelization; it adopts a "divide and conquer" parallel design. First, the input dataset is divided into multiple partitions stored across the compute nodes of the cluster [6]. Then, the expert node sends the computation task to each node, and all compute nodes read their locally stored data to complete their respective computation tasks in parallel. If the number of data partitions is p, the computational concurrency is also p. An important feature of data parallelism is the migration of computation to data: each compute node reduces network data transfer by accessing localized data [7]. When the computational tasks of all nodes are finished, one way of processing, depending on the logic of the algorithm, is for the expert node to collect and aggregate the computational results of all nodes, after which the job execution ends. Another way is to execute a shuffle process, in which the computation results of each node are shuffled to the designated compute nodes [8]. After all compute nodes have collected the shuffled data locally, the subsequent computation tasks are executed in parallel. It has been proposed that experiential teaching should be based on contexts created by teachers and centered on students' experience, with emotional experience being one of the important experiences in teacher-student interaction [9]. Two teaching strategies have been proposed as applicable to secondary school language teaching: a contextualized hands-on strategy and a holistic apprehension strategy.
In the definition of experiential teaching, it is emphasized that the teacher’s guiding role in the teaching process should be brought into play to guide students to personally perceive knowledge, and the five-stage teaching model of entry passion, dialogue empathy, inquiry emotion, practice indulgence, and commentary analysis is proposed [10].
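As a minimal illustration of the data-parallel scheme described earlier in this section, the following Python sketch simulates partitioning, local computation, and aggregation in a single process. A real system would distribute the partitions across cluster nodes and run the local tasks concurrently; `local_task` here is a hypothetical example computation (a partial sum), not an operation from the paper.

```python
# Minimal sketch of the data-parallel "divide and conquer" pattern:
# partition the input, run the same task on each partition, aggregate.

def partition(dataset, p):
    """Split the input dataset into p roughly equal partitions."""
    return [dataset[i::p] for i in range(p)]

def local_task(part):
    """Each compute node runs the same task on its local partition."""
    return sum(part)  # example computation: a partial sum

def run_job(dataset, p):
    parts = partition(dataset, p)               # 1. partition the data
    partials = [local_task(x) for x in parts]   # 2. p concurrent local tasks
    return sum(partials)                        # 3. expert node aggregates

# With p partitions, the computational concurrency is also p.
result = run_job(list(range(10)), p=4)  # -> 45
```

In the shuffle variant, step 3 would instead redistribute each node's results to designated nodes for further parallel processing rather than collecting them centrally.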

Cross-disciplinary computer programming is becoming increasingly important. STEAM education creates space for students to apply disciplinary knowledge to create products or solve problems, and to some extent programming can be attributed to the disciplines of science, technology, engineering, and mathematics. A conceptual framework for using a programming language as a mathematics teaching and learning tool was proposed to achieve an early integration of the programming and mathematics disciplines [11]. Using a qualitative research method of observational documentation, a model of STEM education integrating robotics and visual programming was constructed, arguing that teachers and school leaders should plan, design, and implement integrated STEM activities across grade levels and disciplines, focus on students' solution processes and competency development, and explore other ways of providing a platform for integrated STEM education [12]. As a tool for elementary students to create multimedia stories while integrating Brennan and Resnick's computational thinking framework with math and writing standards, case studies demonstrate that visual programming creates effective connections for teaching computer science, math, and writing [13]. An agent-based visual programming tool was designed for elementary school students' ecological science learning, and the EcoMOD research project blended computational modeling and ecosystem science learning through a 3D virtual forest ecosystem and a 2D visual block programming tool, providing opportunities for computational thinking development and scientific modeling in STEM education practice.

As a kind of implicit thinking, computational thinking must be cultivated with attention to students' cognitive processes and behavioral performance during practice, and their abilities in concept application, project decomposition, error correction and analysis, and problem-solving must be reasonably characterized, so that implicit computational thinking is externalized and students are helped to iteratively revise and improve their works. Therefore, from the perspective of programming behavior representation, this paper records students' operational and interactive behaviors while completing programming tasks with Scratch by means of classroom recordings and realizes the mapping and representation of computational thinking by analyzing the relationship between learning behaviors and cognitive levels. The aim is to reveal the cognitive levels of students' programming activities and their implicit relationship with computational thinking, so as to support the targeted design of visual programming tasks and effectively develop computational thinking.

3. Design of the Smart Programming Analysis Method

Smart FD is designed to solve the problem of distributed data-parallel functional dependency discovery in a data-level partitioning scenario; it not only considers the correctness of distributed functional dependency discovery but also ensures load balancing among computational nodes. Correctness can be ensured by data redistribution, and the number of types of data redistribution equals the number of attributes. It is observed that most of the functional dependencies can be discovered after the first several types of data redistribution.

In the data preprocessing stage, the study proposes an attribute reordering algorithm based on skewness and cardinality. In addition, to reduce the network communication overhead during data redistribution, each record in the relational table is encoded by mapping its attribute values to IDs; the maximum ID of each attribute corresponds to its cardinality. The record encoding amounts to a dictionary compression of all attribute values [14]. The size of the compressed dataset is significantly reduced, and therefore so is the data transfer overhead during data redistribution. In the functional dependency discovery phase, to discover all functional dependencies grouped by a given attribute, the study proposes the efficient distributed algorithm AFDD. Attributes with low skewness and large cardinality are prioritized one by one. After most of the functional dependencies have been discovered, to further improve computational resource utilization, the study proposes the Batch AFDD algorithm to process all the remaining attributes simultaneously.

In a distributed environment, the attribute reordering proposed in this section plays a very important role in ensuring load balancing and high resource utilization. In contrast, for the stand-alone HyFD algorithm, the attribute reordering in the data preprocessing phase aims to improve the efficiency of functional dependency verification; the purpose of attribute reordering is thus completely different for the two algorithms. Next, the data preprocessing phase of the Smart FD algorithm is presented, as shown in Figure 1. Attribute reordering considers both the skewness and the cardinality of the attributes. Figure 1 illustrates the process of attribute reordering, which consists of two main stages. First, attributes with a skewness greater than a specified threshold are selected and treated as skewed attributes. Since skewed attributes lead to significant load imbalance after data redistribution, all skewed attributes are placed at the end. The skewed attributes are then sorted according to skewness.
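The two-stage reordering just described can be sketched as follows. The skewness threshold and the exact sort keys (non-skewed attributes by decreasing cardinality, consistent with prioritizing low-skew, high-cardinality attributes; skewed attributes by increasing skewness) are illustrative assumptions, not the paper's exact rules.

```python
def reorder_attributes(stats, skew_threshold=0.5):
    """Sketch of skewness/cardinality-based attribute reordering.

    stats maps each attribute name to a (skewness, cardinality) pair.
    Non-skewed attributes come first, ordered by decreasing cardinality;
    attributes whose skewness exceeds the threshold are treated as skewed
    and placed at the end, sorted by skewness.
    """
    normal = [a for a, (s, c) in stats.items() if s <= skew_threshold]
    skewed = [a for a, (s, c) in stats.items() if s > skew_threshold]
    normal.sort(key=lambda a: -stats[a][1])  # large cardinality first
    skewed.sort(key=lambda a: stats[a][0])   # least-skewed of the skewed first
    return normal + skewed

example_stats = {"A": (0.1, 100), "B": (0.9, 5), "C": (0.2, 50), "D": (0.7, 10)}
order = reorder_attributes(example_stats)  # -> ["A", "C", "D", "B"]
```

Placing the skewed attributes last defers the redistributions most likely to cause load imbalance until after most functional dependencies have already been discovered.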

Creative thinking makes bold assumptions and conjectures about the problem at hand and creates new paths, not yet practiced by others, to solve it. Pragmatic thinking solves the problem from existing practical experience and summarizes it according to that experience. In summary, analytical thinking is mainly used to decide on the optimal solution to a problem, choosing the most reasonable way to solve it after rigorous logical deduction, and is the mode most commonly used by students; creative thinking is the characteristic mode of innovators, opening new paths in unique ways; pragmatic thinking helps people summarize practical experience, combining theory and practice.

In reality, human intelligence is a balance of analytical, creative, and pragmatic thinking. Everyone possesses all three types, but people differ in which they are good at; for example, students who are good at analytical thinking are more likely to listen to teachers' guidance and deepen their theoretical knowledge, students who are good at creative thinking are always full of imagination, and students who excel at pragmatic thinking have strong hands-on skills and are experienced in life [15]. It is worth noting that excellent people tend to balance all three: those who think only analytically lack creativity and practicality, those who think only creatively do not know how to compare and apply, and those who think only pragmatically tend to have rigid ideas.

In addition, the cumulative proportional function (CRF) is defined as the proportion of validated candidate functional dependencies among all candidate functional dependencies after the data has been redistributed k times. CRF(k) can be expressed as

CRF(k) = N_validated(k) / N_total,

where N_validated(k) is the number of candidate functional dependencies validated after the first k types of data redistribution and N_total is the total number of candidate functional dependencies.

Most of the candidate functional dependencies are clustered in the first few attributes; the CRFs of the first 4 attributes are as high as 90%. Therefore, after the first several types of data redistribution, most of the functional dependencies can be found. The statistics of each attribute contain the total number of nonrepeating values, that is, the cardinality, and the number of occurrences of each nonrepeating value. Since the relational tables are distributed horizontally across the computational nodes, MapReduce can be used to compute the statistical information for each attribute.
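The per-attribute statistics described above can be computed with a simple map/reduce pattern over the horizontal partitions. The following single-process Python sketch is illustrative only; the example partitions are hypothetical data.

```python
from collections import Counter

def map_stats(partition_rows, attr_index):
    """Map side: count occurrences of each value of one attribute
    within a horizontally distributed partition."""
    return Counter(row[attr_index] for row in partition_rows)

def reduce_stats(partial_counts):
    """Reduce side: merge per-partition counters; the cardinality is the
    number of distinct (nonrepeating) values."""
    total = Counter()
    for c in partial_counts:
        total.update(c)
    return total, len(total)

# Two hypothetical partitions of a two-attribute relation:
parts = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
counts, cardinality = reduce_stats([map_stats(p, 0) for p in parts])
# counts == {"a": 2, "b": 1, "c": 1}, cardinality == 3
```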

First, the attribute statistics containing all the nonrepeating values and their IDs are broadcast to each compute node. Since only the attribute statistics are broadcast, the network communication overhead is low. Then, record encoding is performed in parallel on all data partitions, with the attribute order of the encoded records kept consistent with the attribute reordering results. Record encoding can significantly reduce the network communication overhead during data redistribution. In addition, the functional dependency discovery process involves many attribute-value comparison operations; by encoding attribute values as IDs, the attribute comparison overhead is also significantly reduced, thus improving the computational performance of functional dependency discovery.
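A minimal sketch of such dictionary encoding follows, with hypothetical example data; the paper's actual encoding scheme may differ in detail (e.g., ID assignment order).

```python
def build_dictionaries(rows):
    """Broadcast-side statistics: per-attribute dictionaries mapping each
    nonrepeating value to a compact integer ID (IDs run from 0 up to
    cardinality - 1)."""
    n_attrs = len(rows[0])
    dicts = [dict() for _ in range(n_attrs)]
    for row in rows:
        for i, v in enumerate(row):
            dicts[i].setdefault(v, len(dicts[i]))
    return dicts

def encode_records(partition_rows, dicts):
    """Each partition encodes its local records in parallel; the compressed
    integer rows are cheaper to shuffle during data redistribution and
    cheaper to compare during FD discovery."""
    return [tuple(dicts[i][v] for i, v in enumerate(row))
            for row in partition_rows]

rows = [("red", "apple"), ("green", "apple"), ("red", "pear")]
dicts = build_dictionaries(rows)
encoded = encode_records(rows, dicts)
# encoded == [(0, 0), (1, 0), (0, 1)]
```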

Illegal functional dependencies can be discovered by sampling record pairs and comparing the records in each pair; at least one matching attribute in a record pair is required to reveal a potential illegal functional dependency. The study proposes a distributed sampling method in which the sampling process is independent for each partition and all data partitions sample record pairs in parallel, as shown in Figure 2. Verification checks the candidate functional dependencies grouped by given attributes in the FD-tree layer by layer. Sampling significantly reduces the number of candidate functional dependencies [16], so the candidates that are close to holding can be verified in parallel on all data partitions. Functional dependency verification generates a large amount of intermediate data with high memory overhead, so a memory-efficient index-based verification method is proposed. A candidate functional dependency that holds on all data partitions is a final legal functional dependency; otherwise, it is illegal.
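The pair-comparison step can be sketched as follows, referring to attributes by index. This is an illustrative simplification in which each candidate dependency has a single left-hand-side attribute; the actual algorithm works with attribute sets in an FD-tree.

```python
def violated_fds(r1, r2):
    """Compare one record pair: every (lhs -> rhs) where the two records
    match on lhs but differ on rhs is revealed as an illegal functional
    dependency. At least one matching attribute is needed to reveal
    anything."""
    n = len(r1)
    agree = [i for i in range(n) if r1[i] == r2[i]]
    differ = [i for i in range(n) if r1[i] != r2[i]]
    return {(lhs, rhs) for lhs in agree for rhs in differ}

def sample_partition(partition_rows, pairs):
    """Sampling is independent per partition: aggregate the illegal FDs
    found over the sampled record pairs (pairs are index pairs)."""
    illegal = set()
    for i, j in pairs:
        illegal |= violated_fds(partition_rows[i], partition_rows[j])
    return illegal

# Records agree on attributes 0 and 2 but differ on attribute 1,
# so 0 -> 1 and 2 -> 1 are revealed as illegal:
rows = [(1, "x", 7), (1, "y", 7)]
illegal = sample_partition(rows, [(0, 1)])  # -> {(0, 1), (2, 1)}
```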

To further improve the performance of the AFDD algorithm, a unified time-based efficiency metric and an adaptive sampling-verification switching strategy are investigated for the distributed sampling and distributed verification phases. Before each round of functional dependency verification, the failure rate of the verification process is estimated by a distributed detection method. If the failure rate is relatively low, distributed verification is executed directly; otherwise, distributed sampling is executed. When the sampling efficiency falls below the verification efficiency, the algorithm switches to the verification phase.
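A minimal sketch of this switching logic follows; the failure-rate threshold and the interpretation of "efficiency" as results per unit time are illustrative assumptions, not the paper's exact definitions.

```python
def choose_phase(est_failure_rate, sampling_eff, verify_eff,
                 failure_threshold=0.1):
    """Adaptive switching sketch: a distributed probe first estimates the
    failure rate of verification; verification runs directly when the rate
    is low, otherwise sampling runs until its efficiency (e.g. results per
    second) drops below the verification efficiency."""
    if est_failure_rate <= failure_threshold:
        return "verify"
    return "sample" if sampling_eff >= verify_eff else "verify"

choose_phase(0.05, 3.0, 1.0)  # -> "verify" (low estimated failure rate)
choose_phase(0.40, 3.0, 1.0)  # -> "sample" (sampling is still paying off)
choose_phase(0.40, 0.5, 1.0)  # -> "verify" (switch: sampling efficiency low)
```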

In the distributed environment, the sampling process on all data partitions is independent, and the sampling on one data partition does not perceive the sampling results of other partitions, so there will be many redundant sampling results. Moreover, the sampling time overhead of each partition may vary significantly. To improve the efficiency of distributed sampling, the study proposes the fast sampling early aggregation (FSEA) distributed sampling mechanism. Fast sampling (FS) means that each data partition is sampled only once in each round of sampling using the same sampling window in all equivalence classes. Early aggregation (EA) means that the sampled results of each partition, that is, illegal function dependencies, should be aggregated as early as possible. In this way, all sampling results can be collected promptly, and redundant sampling overhead can be avoided. At the same time, based on the aggregated sampling results, the global sampling efficiency can be further evaluated, and a decision can be made whether to continue to execute the next round of distributed sampling based on the efficiency.

An information-based evaluation system for education management can simplify the evaluation process and improve its efficiency. Most current education management informatization evaluation systems are based on result data alone, ignoring process data [17]. The present system integrates big data thinking and technology to collect, clean, and evaluate data and combines result data and process data in the evaluation, rather than relying solely on subjective human judgment, so the evaluation results are more accurate and efficient.

To further revise the indicators and improve the system, assessment practice will be carried out based on the indicators and system. A university in a pilot informatization province is selected as the target of the practice, the practice plan is designed, and the system is used to collect data related to the management informatization of the university and to conduct an automated assessment of its management informatization level. Based on the practice, the proposed indexes and system are improved; reference suggestions are made for the development of management informatization of the evaluated university based on the evaluation results.

4. Analysis of the Application of University Education Management Model Construction

The principle of comprehensiveness means that the constructed index system should be well thought out, should contain all the elements of the research content without duplication or omission, and should allow the elements to interact with each other. Therefore, to reflect the application level of informatization in university education management comprehensively, accurately, and scientifically, all processes and links involved in informatization should be considered [18]. In the evaluation index system, the business involved in the application of informatization in college education management should be considered comprehensively, the top-level indexes should be determined first, and each index should then be refined step by step.

The whole index system can be regarded as a complete system in which each index is a component; the indexes should stand in a dialectical and unified relationship, influencing one another, interdependent yet distinct. Therefore, the indicator system should be established systematically and hierarchically, from abstract to concrete and from global to partial, meticulously dividing the indicators at each level.

In terms of the adaptability of the indicators, it is necessary to ensure that the indicator system can be adapted to different individuals and different environments and to extract their commonalities so that the indicators can be applied to all research objects. The index system constructed in this study can not only evaluate the level of educational management informatization in universities vertically but also compare with the level of informatization application in other universities horizontally to expand the scope of application of the indexes. Meanwhile, education informatization is constantly developing and changing, so the constructed index system should follow the development principle of dynamic change, and the indexes can be updated later according to the evaluation results to provide sufficient space for later adjustment and improvement.

The principle of operability means that the index system should be highly operable, with indicators that are hard to collect removed; the indicators should be simple, feasible, intuitive, and easy to obtain, and the collected data should, as far as possible, be quantifiable through scientific methods. This is conducive to carrying out the assessment and analyzing the results, so the principle of operability is crucial to realizing the assessment in practice, as shown in Table 1.
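As a hypothetical illustration of how quantified indicators might be combined into a unit's comprehensive index, the sketch below computes a weighted sum; the indicator names, weights, and scores are invented for the example and are not taken from Table 1.

```python
def composite_index(scores, weights):
    """Weighted composite index for one management unit.

    scores:  {indicator: quantified score}
    weights: {indicator: weight from the index system}; weights must sum to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in weights)

# Hypothetical three-indicator example:
weights = {"infrastructure": 0.3, "application": 0.4, "security": 0.3}
scores = {"infrastructure": 80, "application": 70, "security": 60}
value = composite_index(scores, weights)  # weighted average of the scores
```

Units can then be compared against the average of these composite indexes, as in the statistics reported earlier in the paper.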

In the design stage, students' imagination is stimulated by selecting task topics that attract their attention and leading them to create artworks in various forms by fully exploiting the characteristics of the Scratch visual programming tool. Production is the process of transforming design blueprints into physical objects, and the selection and addition of backgrounds, characters, and scripts are indispensable to the production of each work, providing a reference for students to analyze and produce complex works. Evaluation is the re-reflection on and recognition of one's own work, taking full advantage of multiple evaluating subjects to help students iteratively revise and improve their work and optimize its effect.

Interviews with students participating in the course were conducted to observe the achievement of cognitive goals and the computational thought processes in learners' creations, to understand more specifically students' psychological experience of programming with Scratch, and to help the researcher code learners' programming behaviors more comprehensively [19]. The main reason for the above situation is that schools place too much emphasis on students' academic performance as the main, or even the only, evaluation criterion, thus neglecting other requirements for student members.

At the same time, a scientific and complete assessment system for party members is lacking. At present, in a considerable number of schools this work is handled by counselors or even by students, so student party building tends to vary from person to person and grow increasingly arbitrary, lacking a scientific and relatively fixed standardized model. Based only on the existing inspection standards and evaluation system, students' performance in one or several aspects can be evaluated only unilaterally; it is difficult to obtain a comprehensive and objective evaluation of students, and therefore also difficult to manage student party members in a targeted manner and promote their progress, as shown in Figure 3.

Data collection is based on the data required by the assessment indexes, and the data is gathered into the assessment system. The data involved in this study mainly include process data and outcome data, and different collection methods are used for different data. Process data may come from different management business systems and are generated quickly and in large volumes, so they need to be collected automatically by the system. Some outcome data can be obtained from existing management business systems, while some do not exist in any system; outcome data are therefore collected in two ways, automatically by the system and via questionnaires [20]. Since the system will ultimately carry out the assessment and calculation automatically based on the data, all data need to be aggregated into this assessment system.

The bottom layer of the system architecture is the data layer, which stores all kinds of data of the assessment system, including assessment index data, index weight data, process data, result data, basic data, and logs. Assessment index data comprise the names of the indicators in the index system and the level of each indicator entered into the database. Index weight data are the weight of each indicator within the overall weighting. Process data refer to the data generated in the course of user operations; they exclude errors due to human subjectivity, form the most important part of the data layer, and are the key data for informatization evaluation. Result data refer to data reflecting informatization construction, such as network bandwidth and consumables funding. The basic data include user tables, organization tables, and the other basic tables of this system.

5. Analysis of the Performance Results of the Intelligent Analysis Algorithm

To achieve a fair comparison, the metalearning function of autolearn is turned off by default, and autolearn's algorithm and hyperparameter selection space are the same as BOASF's. In addition, the maximum evaluation time for a single model is limited to 120 seconds. Figure 4 shows the performance comparison of BOASF and the three baseline algorithms on 30 datasets. In the BOASF algorithm, the total number of evaluation rounds is 3 by default. As can be seen in Figure 4, BOASF outperforms the other three algorithms on most of the datasets. Since BOASF applies Bayesian optimization to each bandit arm, it outperforms the SelectBest method, which uses default hyperparameters. Random Restate uses Bayesian optimization for hyperparameter tuning, but because the Random Forest model is not always optimal and it lacks a model selection function, its performance is inferior to that of autolearn and BOASF, which include model selection.
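The bandit-style selection underlying BOASF can be caricatured as follows. This sketch replaces per-arm Bayesian optimization with random score draws and uses an invented halve-the-arms rule per round, so it only illustrates the idea of allocating evaluation rounds to promising model families, not BOASF itself.

```python
import random

def bandit_model_search(arms, rounds=3, seed=0):
    """Toy bandit-style model selection: each arm is a candidate model
    family with an evaluate(rng) function returning a validation score;
    after every round the weaker half of the arms is discarded.
    (BOASF additionally runs Bayesian optimization inside each arm.)"""
    rng = random.Random(seed)
    alive = list(arms)
    best = {}
    for _ in range(rounds):
        for name, evaluate in alive:
            score = evaluate(rng)
            best[name] = max(best.get(name, 0.0), score)
        alive.sort(key=lambda a: best[a[0]], reverse=True)
        alive = alive[: max(1, len(alive) // 2)]  # drop the weaker half
    return alive[0][0], best[alive[0][0]]

# Hypothetical arms whose validation scores fall in disjoint ranges:
arms = [
    ("rf",  lambda rng: 0.80 + 0.05 * rng.random()),
    ("svm", lambda rng: 0.70 + 0.05 * rng.random()),
    ("knn", lambda rng: 0.60 + 0.05 * rng.random()),
]
winner, score = bandit_model_search(arms)  # winner == "rf"
```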

BOASF achieves better performance than autolearn under different time budgets. With a time budget of 2 hours, BOASF outperforms autolearn on 20 datasets, and performance is equal on 3 datasets. When the time budget is increased to 4 hours, BOASF outperforms autolearn on 23 datasets, with equal performance on 3 datasets. Therefore, BOASF achieves better and more robust performance under different time budgets.

In addition, most practical analysis scenarios involve class imbalance as well as concept drift. Therefore, to reduce the impact of class imbalance on the global incremental model, the current batch of data can be downsampled, and incremental training of the global model can be performed on the sampled data instead of the full batch.
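The per-batch downsampling step can be sketched as follows. This is a minimal sketch assuming simple random majority-class downsampling toward a target minority ratio; the paper does not specify the exact sampling scheme, and the function name and parameters are illustrative.

```python
import random

def downsample_batch(X, y, target_ratio=0.5, seed=0):
    """Randomly downsample the majority class of one data batch so that the
    minority class approaches target_ratio of the sampled batch, before the
    batch is fed to incremental training of the global model.
    A sketch only; the paper's exact sampling scheme is not specified."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    # keep only enough majority samples for the minority share to reach target_ratio
    keep = min(len(majority), int(len(minority) * (1 - target_ratio) / target_ratio))
    kept = minority + rng.sample(majority, keep)
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]
```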

In Auto LLE, both the global incremental model and the local models use the LightGBM classifier. The slope of the sigmoid function is a = 1, the target data skew rate is r = 0.5, and each local ensemble contains 5 classifiers. The same parameter configuration was used for all compared methods. Each method was run 10 times, with AUC as the performance metric, and each method was evaluated by its average AUC over all test batches. Figure 5 shows the average and median performance rankings of all methods on each dataset. The experimental results show that Auto LLE outperforms the other compared methods, confirming the effectiveness of weighted ensemble learning and of the adaptive weighting strategy. In particular, Auto LLE outperforms the short-term-only variant, verifying that the global incremental model effectively captures long-term concepts in the data, and it outperforms the mean-weighting variant, verifying that the adaptive weight design effectively handles concept drift.
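A sigmoid-based blend of the global and local predictions, driven by their recent errors, can be sketched like this. The exact Auto LLE weighting formula is not given in this section, so the combination rule below (weighting by the error difference) and the function names are assumptions for illustration.

```python
import math

def sigmoid(x, a=1.0):
    """Sigmoid with slope parameter a (a = 1 in the experiments above)."""
    return 1.0 / (1.0 + math.exp(-a * x))

def combine_predictions(p_global, p_local, err_global, err_local, a=1.0):
    """Weight the global incremental model against the local ensemble by
    their recent error difference: the lower-error model receives the larger
    weight. A sketch of adaptive weighting, not the exact Auto LLE formula."""
    w_global = sigmoid(err_local - err_global, a)  # global favored when the local model errs more
    return w_global * p_global + (1.0 - w_global) * p_local
```

With equal recent errors the two models are weighted 50/50; as one model's error grows, its weight smoothly decays toward zero, which is how the ensemble can track concept drift.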

This section first presented Auto LLE, an automated lifelong-learning framework based on adaptively weighted ensemble learning, which captures both long-term and short-term concepts by integrating a global incremental model with local ensemble models. To adapt to concept changes promptly, the study further proposed an adaptive strategy for designing and adjusting model weights based on time windows and error metrics. With the rapid development and popularization of computer and information technology, industry application data have exploded, and society has entered the era of big data and the digital economy. Experimental results show that Auto LLE can efficiently and automatically capture concept drift and improve prediction performance by automatically updating the model. However, existing AutoML techniques for traditional machine learning cannot meet the automated modeling needs of complex scenarios such as end-to-end big data analysis, resource-constrained settings, and lifelong learning. To this end, this section proposed AutoML methods and algorithms for these scenarios, building on basic machine learning theories and methods such as Bayesian optimization, reinforcement learning, and ensemble learning, with a focus on key scientific issues such as the effectiveness of automated search-based modeling and the optimization of search efficiency.

The research proposes an algorithm combining Bayesian optimization and adaptive successive filtering, referred to as BOASF. BOASF abstracts the AutoML problem as a multi-armed bandit problem and accelerates the search through adaptive fast filtering of bandit arms and adaptive resource allocation. BOASF supports both model selection and hyperparameter optimization. Experimental results show that, under different time budgets, BOASF achieves better prediction performance than current state-of-the-art AutoML methods on both tasks.
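The filter-and-reallocate loop can be sketched as a toy successive-filtering procedure. This is a sketch under stated assumptions: each "arm" is a candidate model family exposed as a scoring function, plain random sampling stands in for the Bayesian optimization that real BOASF runs inside each arm, and the halving schedule is illustrative rather than the paper's exact policy.

```python
import random

def boasf_sketch(arms, rounds=3, budget_per_round=4, seed=0):
    """Toy successive-filtering loop in the spirit of BOASF: each arm is a
    candidate model family whose sampler returns a validation score; after
    each round the worse half of the surviving arms is discarded and the
    freed budget is reallocated to the survivors. Real BOASF runs Bayesian
    optimization inside each arm; random sampling stands in for it here."""
    rng = random.Random(seed)
    survivors = dict(arms)  # arm name -> sampler(rng) -> score
    best = {name: float("-inf") for name in arms}
    for _ in range(rounds):
        for name, sample in survivors.items():
            for _ in range(budget_per_round):
                best[name] = max(best[name], sample(rng))
        # adaptive filtering: keep the better half of the surviving arms
        ranked = sorted(survivors, key=lambda n: best[n], reverse=True)
        survivors = {n: survivors[n] for n in ranked[:max(1, len(ranked) // 2)]}
        budget_per_round *= 2  # reallocate freed budget to the survivors
    return max(best, key=best.get)
```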

6. Results of Applying the University Education Management Model

The experimental group was taught according to lesson plans designed with the experiential teaching model proposed in this paper, while the control group was taught according to model lesson plans from other teachers; the study ran in these two classes for two months. Two test papers were compiled, and pretests and posttests were administered to the two classes. The test results were analyzed statistically, real classroom performance was recorded through observation of student feedback, and interviews with individual students and teachers probed the model's effect on students' interest in physics learning as well as its advantages and disadvantages, providing more comprehensive feedback on the application effect. The pretest results are shown in Figure 6.

The mean score of the experimental group in the pretest was 0.58 points higher than that of the control group, and the independent-samples t-test gave a p-value greater than 0.05, indicating that the difference between the two classes was not significant: the students' basic learning situations before the experiment were comparable, so a controlled teaching experiment could be conducted. In the posttest, the mean score of the experimental class was 5.36 points higher than that of the control class, a clear advantage; the independent-samples t-test on the posttest scores gives a p-value less than 0.05, indicating that the difference between the two classes' posttest scores is significant.
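The independent-samples t-test used above can be sketched with a pooled-variance t statistic. The function name and the toy data are illustrative; the actual class scores are not reproduced here, and a standard statistics package (e.g. `scipy.stats.ttest_ind`) would normally supply the p-value.

```python
import math

def pooled_t_statistic(a, b):
    """Student's independent-samples t statistic with pooled variance.
    Returns the statistic and the degrees of freedom; the p-value would
    come from the t distribution with that many degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```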

Testing and debugging means comparing the written program with the expected effect or success criteria; if the program does not meet the success specification, students actively analyze the causes, identify missing or erroneous program logic, and make modifications and adjustments so that the program runs correctly and its results meet expectations. Testing and debugging is an important part of programming practice, and students' testing and debugging cannot be separated from the analysis of computational concepts. Attending to students' comprehensive understanding of computational concepts, guiding them to seek help from teachers or classmates when they encounter difficulties, and exploring and analyzing the causes collaboratively can effectively enhance students' ability to quickly find and correct errors, as shown in Figure 7.

In the construction of management systems, the average comprehensive index of the management departments is 66.35, with 23 above the average, accounting for 64%, and 13 below, accounting for 36%. In the construction of dedicated business systems, the average comprehensive index of the management units is 75.56, with 25 above the average (69.44%) and 11 below (30.56%); 27 management departments have professional management systems, accounting for 75%, while 9 have not yet built them. In unified certification and portal integration, the average comprehensive index of the management departments is 62.92, with 23 units above the average (63.89%) and 13 below (36.11%). Twenty-one management departments intend to build professional information systems in the next 3-5 years to raise their informatization level, while 15 have no such plans; this has practical significance for realizing the goal of cultivating builders and successors with all-round development. Among the proposed business systems, 16 management departments are building management systems, accounting for 44.44%; among the systems under construction, 16 management departments have completed single sign-on for their built systems (44.4%), 5 have achieved single sign-on for some of their business systems, and 15 have not yet achieved it.

Single sign-on construction in most management departments is thus not yet satisfactory. Regarding system security, the average comprehensive index of the units is 61.11, with 22 units above the average (61.11%) and 14 below (38.89%); the management systems of 23 management departments have passed security monitoring (63.89%), 11 departments' professional systems have not passed, and 2 departments' systems have partially passed.

7. Conclusion

To address the high computational and memory complexity of functional dependency discovery algorithms in large-scale data scenarios, with their huge running time and memory overhead, the study proposes Smart FD, a large-scale distributed functional dependency discovery algorithm based on attribute reordering. Smart FD consists of two phases: data preprocessing and functional dependency discovery. In the preprocessing phase, an attribute reordering method based on skew and cardinality is designed, in which attributes with low skew and high cardinality are prioritized one by one. In the discovery phase, a distributed sampling method with a fast-sampling early-aggregation mechanism, an index-based validation method, and an adaptive switching method between sampling and validation are used to discover all functional dependencies grouped by a given attribute. This technological innovation provides new ways and tools for local colleges and universities to carry out educational management, but its imperfect supervision mechanism also brings new challenges, which is the new situation faced by colleges and universities in educational management at the current stage. The results of the teaching practice show that the model helps students understand concepts and physical laws deeply and thus solve physics problems better. Through this teaching model, students' comprehensive scientific literacy in physics is improved, and their performance can be raised to a certain extent; at the same time, it positively promotes students' interest in and attitude toward learning physics.
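The preprocessing step of Smart FD, ordering attributes by low skew and high cardinality, can be sketched as a sort key. The skew measure below (the frequency share of the most common value) is an illustrative proxy, since the paper does not give the exact formula, and the function name is an assumption.

```python
from collections import Counter

def reorder_attributes(table, columns):
    """Order attributes so that low-skew, high-cardinality attributes come
    first, as in the Smart FD preprocessing phase described above. Skew is
    approximated here as the frequency share of the most common value,
    an illustrative proxy for the paper's unspecified measure."""
    def key(col):
        values = [row[col] for row in table]
        counts = Counter(values)
        skew = counts.most_common(1)[0][1] / len(values)  # most-frequent share
        cardinality = len(counts)
        return (skew, -cardinality)  # low skew first, then high cardinality
    return sorted(columns, key=key)
```

Processing low-skew, high-cardinality attributes first tends to prune the candidate space early, which is what makes the subsequent sampling and validation phases cheaper.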
Using the proposed indexes and system, this study evaluates the level of educational management informatization of a university in an educational informatization pilot province. Based on the evaluation results, the proposed indexes and system are revised and improved on the one hand, and guidance and suggestions for the development of educational management informatization in universities are provided on the other; the evaluation also serves as a demonstration case for informatization evaluation practice in universities, promoting its development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Jitang College, North China University of Science and Technology.