At the big data infrastructure layer, this article first establishes a university network education system model based on physical failure repair behavior and then examines in depth, on the principle of mobile edge computing, the complex common-cause relationship by which a single physical machine failure produces multiple data failures in a big data environment. At the application service layer, a performance model based on queuing theory is established, with the amount of available resources as a conditional parameter; the model captures important events in mobile edge computing, such as queue overflow and timeout failure. The impact of failure repair behavior on the random variation of dynamic system energy consumption is then investigated, and a system energy consumption model is developed accordingly. The network education system for colleges and universities includes a user login module, teaching resource management module, student and teacher management module, online teaching management module, student achievement management module, student homework management module, system data management module, and other business functions. Based on mobile edge computing theory, a set of comprehensive evaluation indicators characterizing this correlation, such as expected performance and expected energy consumption, is proposed, and from these indicators a new indicator is derived to quantify the complex constraint relationship. Finally, a functional use-case test was conducted, focusing on the query function for online education information, and a performance test was run in the software operating environment: after the test scenario was developed, the server's CPU utilization was measured while the software was running. The results show that the designed network education platform is stable and can withstand user access pressure.
The performance ratio indicator can effectively assist a cloud computing system in selecting a more appropriate option for a migrated traditional service system.

1. Introduction

With the rapid expansion of the Internet, a new generation of information technology represented by cloud computing and big data processing continues to achieve the integration and sharing of various resources, thereby building a new class of large-scale complex IT systems (LSCITS) [1]. In comparison with traditional IT systems, an LSCITS must not only effectively manage large-scale, heterogeneous, and complex infrastructure resources but also meet diversified application requirements, particularly requirements for reliable computing, high-performance computing, energy saving, and emission reduction [2–4]. At present, IoT devices have become the primary source of the rise in Distributed Denial of Service (DDoS) attacks, causing significant disruption to normal data transmission at the IoT network layer. As a result, how to effectively manage network nodes, defend against network threats, and ensure reliable data transfer are concerns that must be addressed urgently in IoT security management [5–7].

With the further development of Internet technology, mobile edge computing has been widely adopted, and many companies are committed to providing new cloud services through cloud computing systems, such as Amazon Elastic Compute Cloud (EC2), Google Cloud Platform, and Microsoft Azure [8]. SDIoT controllers typically have substantial computing and storage resources, allowing attack defense algorithms that are difficult to implement in traditional Internet of Things and Wireless Sensor Networks (WSN) to be easily deployed in the controller, and allowing SDIoT's centralized control to achieve a flexible attack response [9]. Because cloud computing enables big data technology to shield the underlying heterogeneity, many emerging service models have been derived, such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS), making service-oriented computing widely used [10–12].

System index evaluation based on theoretical models is critical for achieving reliable, efficient, and energy-saving optimized scheduling management for university network education in large-scale complex systems. In previous related research, however, reliability, performance, and energy consumption indicators are frequently analyzed separately, and the reliability-performance-energy (RPE) correlation between these indicators is ignored. This study therefore proposes a centralized minimal-transmission-cost routing technique to ensure data message transmission dependability, taking into account each node's global trust value and remaining energy. Compared with existing methods, the experimental results show that the approach presented in this study effectively increases network packet delivery rate, reduces network energy consumption, minimizes control overhead, extends network lifetime, and secures the perception layer network. Furthermore, large-scale infrastructure resources present additional challenges for multiobjective optimized scheduling management. In response to these key issues, this paper conducts a systematic and comprehensive RPE association modeling study on two types of typical LSCITS, namely cloud computing systems and big data processing systems. At the same time, Bionic Autonomic Nervous Systems are used in the design of the dispatch management system, and optimized dispatch management that comprehensively considers reliability, performance, and energy consumption is further studied based on the established correlation model.

The big data platform uses Hadoop, an open source technology provided by Apache. MapReduce and HDFS are the two core components of the Hadoop technology system. But the Hadoop technology system is not only these two parts but also contains many other technologies to form a complete large-scale distributed computing system, such as Hbase, Pig, Hive, Sqoop, and ZooKeeper. The gradual development of these technologies has made big data technology more powerful and more convenient to use.

2. Related Work

Du et al. [8] analyzed the correlation among data center network communication energy consumption, network congestion status, and job performance and balanced the performance-energy (PE) constraints through job consolidation and congestion control. Lai et al. [13] proposed a stochastic queuing performance model based on the cloud data center queuing network, realized PE correlation analysis through the voltage parameters of the physical equipment in the model, and finally applied dynamic voltage scaling to reduce energy consumption. He et al. [14] built a PE association model based on physical-server CPU utilization and reduced energy consumption by consolidating cloud services onto fewer servers; based on the model analysis, they designed a consolidation method with minimal performance loss, established a performance-oriented cloud data center architecture based on real-time VM migration, and used the model to analyze the potential impact of consolidation and migration on system performance. The energy-saving method of dynamically switching physical servers according to system load has also been studied, with a prediction mechanism based on historical data analysis to forecast the system workload, so that the resulting changes not only save energy but also meet system performance requirements.

Badarneh et al. [15] used advanced ASP and database platforms, adopted a three-layer structural model, gave basic module design and implementation methods, added multiple roles, and developed an online education platform, which directly affects the efficiency and informatization level of online education management. Huang et al. [16] reduced, to a certain extent, the coupling between service providers and users and opened up new application fields: Web services are used in the development of online education platforms for comprehensive control and management and for the add, delete, retrieve, and update operations on online education management data, and XML-based Web services are developed for scalability and maintainability. Even as technical means advance, the traditional network education management service model remains limited and does not meet the development requirements of the times; the digitalization and informatization of network education management is unavoidable [17–19]. The rapid development of scientific and technological revolutions has produced the online education platform, which includes online education statistics and management capabilities. As an important component of digital education, the online education platform guarantees the scientific administration of online education and plays an important role [20].

3. The Construction of a University Network Education System Based on Mobile Edge Computing in the Era of Big Data

3.1. Hierarchical Topology of Big Data Technology

Intelligent technology's perception layer [21–23] realizes information perception and collection and acts as a link between the physical, information, and human worlds. The perception layer is used to describe all types of wireless access networks that allow network access services to be provided at any time and in any location. WSN is one of the most widely used access network technologies in the sensing layer of the Internet of Things: it is a multihop wireless network made up of many static sensing nodes that self-organize. The sensing nodes cooperate to detect, process, and transmit information about the monitored target in the coverage area and then send the data to the observer. The hierarchical topology of big data technology is shown in Figure 1.

The network layer comprises two sorts of networks: wired communication networks and wireless communication networks. Wired communication is the primary support of the Internet of Things sector, while wireless communication is an essential component of the Internet of Things' future development. Telecommunications switching networks, the Internet, mobile communication networks, satellite communications, and other services are all part of the network layer. The telecommunication network and the Internet, for example, are relatively mature and serve as the backbone components of the Internet of Things network layer. A sensing device can be connected to the network layer through an access network or a gateway device.

The application layer sits on top of the Internet of Things, and it enables a wide range of application services by fully utilizing the Internet of Things’ information resources. To achieve precise control of the physical world and scientific decision-making based on sensor data collected by the perception layer, the application layer employs artificial intelligence, cloud computing, big data, and other technologies. The application layer, which has the characteristics of diversification, industrialization, and large-scale, currently implements application services such as smart transportation, smart home, smart medical, and environmental monitoring.

SOAP, a simple, lightweight, XML-based protocol, is used to transmit and exchange structured data over the network; service users use the SOAP protocol to request and access Web services. SOAP can be built on top of the HTTP, SMTP, and TCP protocols. The most common carrier is HTTP, because HTTP traffic easily traverses firewalls, so distributed computing on the network can proceed unimpeded.
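As an illustration of how such a request is assembled, the following Python sketch builds a minimal SOAP 1.1 envelope with only the standard library. The service namespace and the `QueryCourseInfo` operation are hypothetical examples, not names taken from the system described here.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, ns):
    """Build a minimal SOAP 1.1 envelope wrapping one operation call."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        child = ET.SubElement(op, f"{{{ns}}}{name}")
        child.text = str(value)
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

# Hypothetical education-service namespace and operation, for illustration only.
payload = build_soap_request(
    "QueryCourseInfo", {"courseId": "CS101"}, ns="http://example.edu/education"
)
```

The resulting bytes would typically be POSTed over HTTP with a `Content-Type: text/xml` header, which is what lets the request traverse firewalls like ordinary web traffic.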

3.2. Mobile Edge Computing Process

Mobile edge computing offers real-time radio access network (RAN) information to content providers and application developers, such as network load and user location. These real-time network data can be used to provide context-aware services to mobile users, hence increasing user experience and service quality.

Mobile edge computing assigns new responsibilities to the edge network, allowing it to perform tasks such as computation and services while also giving it more management authority, in order to reduce network latency and energy consumption for mobile users. Network operators allow third-party partners to deploy processing at the edge of the wireless communication network, so new applications and edge services for mobile customers and businesses can be deployed more quickly. The MEC system is typically used to connect a wireless access point to a wired network, but it can also connect a wireless access network to a mobile core network. The MEC server with IT service capabilities, built on a common hardware platform, is the heart of the edge computing system. Localized services can be realized by deploying the MEC server at the edge of, or inside, the wireless network; private-cloud and hybrid-cloud services can also be realized when it is connected to other networks. The MEC system also includes a cloud-based big data environment that allows third-party applications to run on edge cloud data. It can be opened to third-party applications, and other wireless network capabilities related to edge services can be realized through the platform middleware on the MEC server.

Figure 2 shows the distribution of mobile edge computing network nodes. Attack defense algorithms that are difficult to implement in traditional Internet of Things and WSN can be easily implemented in the controller thanks to the advantages of centralized management, separation of control and forwarding, and a global topology view. As a result, the SDWSN controller can provide security functions such as trust management, secure routing, attack defense, and access control more effectively. On the other hand, normal operation of the SDIoT perception layer requires safe and reliable networking. Because the control plane is centralized, SDWSN must collect node topology information periodically in order to construct a global network topology. Given the limited communication capabilities of WSN sensor equipment, the multihop transmission of topological messages, open wireless channels, and harsh deployment environments, an attacker who captures and compromises an intermediate node and launches a selective forwarding attack can destroy the transmission of control messages and disrupt SDWSN's normal operation. Furthermore, attackers can exploit the flow-table request processing mode of SDN to forge a large number of new messages and launch new-flow attacks, consuming a significant amount of bandwidth and computing resources on intermediate nodes, sensing networks, and controllers, and causing network paralysis.

3.3. Applicability Analysis of Online Education

In order to achieve better network education trust evaluation, mobile edge computing uses the security resources of nodes and controllers at the same time. Among them, the SDWSN node is not only a forwarding device but also a sensing device with its own security capabilities, which can evaluate its own security status and provide trust information for the controller to make decisions. Therefore, ETMRM implements local trust evaluation at the node level and global trust evaluation at the controller level. Therefore, the trust in ETMRM can be divided into local trust and global trust. By observing the communication and interaction process with neighbor nodes, the node has gained local trust, which reflects the degree of trust in its neighbor nodes. Global trust is calculated by the controller based on the collected local trust information based on the global trust model, which reflects the trust rating of a certain node in the entire network. Figure 3 shows the applicability framework of online education.

When the network scale is large, solving the INLP problem directly becomes challenging. To reduce the size of the search space, the nodes in the network with higher residual energy and higher trust values can be added to the AP candidate subset. Furthermore, given the controller's extensive computation and storage capabilities, a variety of heuristic techniques can be used to tackle the problem. The data application unit (DAU), also called the service unit, is a series of application unit modules on the data resource pool. A DAU that can provide services for application systems consists of three parts: the unit description, the DAU main body, and the service interface. For application management and services, it is similar to the building blocks and application-program-interface calls in the component-based system construction model. Corresponding data application units are provided according to the different business needs of the application system, such as data function units (DFU) that provide different functions according to different data types, data service units (DSU) that provide services in push mode, data encryption and decryption units (DEU), data authorization calling units (DIV), data application combination units (DCU), data visualization units (DVU), and data processing units (DPU).
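The search-space pruning described above can be sketched as a simple filter over the node set; the threshold values and the node field names below are illustrative assumptions, not parameters from the paper.

```python
def ap_candidates(nodes, energy_min, trust_min):
    """Prune the INLP search space: keep only nodes whose residual
    energy and global trust value both exceed the given thresholds.
    The surviving subset is the AP candidate set handed to the solver."""
    return [n for n in nodes
            if n["energy"] >= energy_min and n["trust"] >= trust_min]
```

A heuristic solver (greedy selection, simulated annealing, etc.) then only searches over this reduced candidate set rather than all nodes.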

3.4. System Factor Weight Recursion

The selected AP node can always guarantee the safe transmission of control messages when there is no attacker or when the number of attackers in the education network is small. As the number of attacking nodes grows, some compromised nodes may be chosen as APs, and a compromised AP that launches an attack to discard control packets has a significant impact on the network's normal operation. Because control messages carry more weight than data messages in the local trust evaluation model, the local trust evaluation mechanism of neighboring nodes or other APs will quickly detect such APs. ETMRM is implemented through three major processes: local trust evaluation, global trust management, and global trusted routing. In the local trust evaluation mechanism, a sensing node uses the counting function of the SensorFlow flow table to monitor the sending behavior of its neighbors and runs a lightweight trust evaluation algorithm to obtain the local trust values of its neighbor nodes. In the global trust management mechanism, the controller uses an energy-efficient topology collection process to obtain local trust information and derives global trust information from the global trust model, realizing the detection of malicious nodes in the network. On the basis of global trust, global trusted routing isolates and responds to malicious nodes and uses the global trusted view to build safe and reliable data routes.
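A lightweight Bayesian local trust evaluation of the kind described above can be sketched with a Beta-reputation estimator, a common choice for scoring forwarding behavior from flow-table counters. ETMRM's exact formulas are not given here, so both the Beta(1, 1) prior and the mean-based controller aggregation are assumptions for illustration.

```python
def local_trust(successes, failures):
    """Beta-reputation estimate of a neighbour's forwarding trust.
    With a uniform Beta(1, 1) prior, the posterior mean after observing
    s forwarded and f dropped packets is (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

def global_trust(local_reports):
    """Controller-side aggregation of the local trust values reported
    for one node. A plain mean is used here as a stand-in for the
    paper's (unspecified) global trust model."""
    return sum(local_reports) / len(local_reports)
```

With no observations the estimate is a neutral 0.5, and it converges toward the observed forwarding ratio as counter evidence accumulates.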

Figure 4 shows the weights of the mobile edge computing factors. The XML bus technology of mobile edge computing is widely used as the main way to achieve message transfer between application units; for example, the EOS mentioned above encapsulates the three major data areas of an application through the DOM interface of XML. The session, request, and business-processing data areas form the data bus area of the entire system. All data is standardized in XML text format, and the XPath addressing mode is used for message transmission. A traditional interface completes the instantiation of an object according to the object type and variable name when a specific interface is called; the EOS bus, constituted by the session data area, the request data area, and the business data area, differs from this traditional object-interface message passing in that it plays the additional role of an intermediary in message transmission during system construction. The position of a component interface in the EOS bus is relatively fixed; when the EOS platform is running, the content at that position may differ depending on the messages sent.

4. Application and Analysis of University Network Education System Based on Mobile Edge Computing in the Era of Big Data

4.1. Mobile Edge Computing Data Preprocessing

We present two benchmark schemes, a random edge caching approach and a noncooperative edge caching strategy, to validate the performance of the network education caching strategy based on mobile edge computing described in this research. In the random caching approach, content of different processing levels is randomly placed on vehicles and roadside units; in the noncooperative edge caching scheme, the content is deployed directly on the roadside units using the joint optimization technique proposed in this research. First, the amount of input data for each task directly affects the transmission time and execution time of the task (the larger the amount of data, the longer both the transmission time and the execution time) and indirectly affects user-server-subchannel matching and device transmission power allocation (both of which depend on the transmission and execution times during execution).
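The dependence of transmission and execution time on the input data volume can be written down directly. The linear model below (data volume over channel rate, and data volume times cycles-per-bit over CPU frequency) is the standard formulation in offloading models and is an assumption, not a formula quoted from this paper.

```python
def transmission_time(data_bits, rate_bps):
    """Time to upload a task's input over a channel: volume / rate."""
    return data_bits / rate_bps

def execution_time(data_bits, cycles_per_bit, cpu_hz):
    """Time to execute a task: total CPU cycles needed (input volume
    times unit-data workload) divided by the processor frequency."""
    return data_bits * cycles_per_bit / cpu_hz
```

Both times grow linearly in the data volume, which is why the matching and power-allocation stages that consume these times are affected indirectly.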

The allocation of mobile edge computing factors is shown in Table 1. Second, the task's unit data workload has a direct impact on the task's execution time (the higher the unit data workload, the longer the execution time), as well as an indirect impact on the device's user-server-subchannel matching and transmission power allocation. In this scenario, the result analysis reveals that the query function has no performance flaws. The middleware server's CPU usage remained low, and it did not stay at 100% after the scene was stopped. The failure under this stress was a download resource timeout, which appeared when the number of running users in the scene reached 250; from that point, errors occurred, and transaction response time grew as the number of active users grew.

The saturation of the big data distribution network is depicted in Figure 5. The energy consumption of the SDS algorithm is very low when user saturation is low (25%, 50%); however, when user saturation is high (75%, 100%), the energy consumption of the SDS algorithm grows dramatically, which demonstrates that the algorithm is unsuitable for high-saturation mobile edge computing systems. When user saturation is low, user interference is small, and the SDS algorithm can exert its advantages more effectively. Across user saturations of 25%, 50%, 75%, and 100%, JTOTPA consumes the least energy and performs best in the majority of circumstances. When each task in the system has the same input data volume but a different workload, JTOTPA can effectively adapt the mobile device's transmission power and the user-server-subchannel matching scheme, reducing system latency and user energy consumption; JTOTPA's user energy usage is neither purely proportional nor purely inverse to the growth in unit data workload. If the currently available communication or computing resources exceed the resources required by the task owner, the right to provide resources is allocated to resource providers in a certain proportion, in descending order of reputation value; after the optimal resource price is obtained, higher resource prices are likewise assigned preferentially to resource providers with high reputation values. This incentive mechanism can encourage more users to participate in distributed resource transactions, thereby effectively ensuring the reliable operation of the system.
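The reputation-ordered allocation rule can be sketched as a greedy pass over providers. The field names and the first-fit proportioning below are illustrative assumptions about how "allocated in a certain proportion, in descending order of reputation" might be realized.

```python
def allocate(demand, providers):
    """Serve providers in descending reputation order, granting each the
    right to supply up to its capacity until the task owner's demand is
    met. Returns a mapping from provider id to allocated amount."""
    allocation = {}
    for p in sorted(providers, key=lambda p: p["reputation"], reverse=True):
        if demand <= 0:
            break
        share = min(p["capacity"], demand)
        allocation[p["id"]] = share
        demand -= share
    return allocation
```

A pricing step would then assign the higher optimal prices to the same high-reputation providers, reinforcing the incentive to behave reliably.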

4.2. Simulation of the Online Education System in Colleges and Universities

This paper simulates a sensing network with 100 randomly distributed nodes. To replicate the data transmission requirements of IoT communication and ensure the fairness of experimental comparison, this paper uses a pseudorandom approach to generate the source and destination addresses of data packets, treating the network as an SDWSN for IoT applications. The network monitoring cycle is 30 seconds, the data payload is 240 bits, and the data transmission interval ranges from 2 to 25 seconds in order to simulate a network environment with varying communication rates. Furthermore, we used a Yokogawa WT310 digital power meter to measure the key energy consumption characteristics of a Texas Instruments CC2530 sensor node in order to simulate the energy consumption of a real SDWSN sensing device. WebTeach was created as a new system database, with data tables such as grade information, homework information, course information, student information, and user information tables. The designed network teaching management system includes tables such as the performance information table scorelist, the homework information table worklist, the student information table student, and the user information table. When the output power is set to 1 dB, the node's power consumption parameters, including sending, receiving, and standby, are as given in the text. Figure 6 depicts the power consumption metrics for the online education system's approval rate.

When the maximum number of users is 300, 10 users log in per second; when 20 users log in at the same time, 1 user logs in per second, resulting in a market development strategy management system with a maximum of 300 concurrent users. The system simulates mixed user operations and records the test parameters by running multiple continuous operations under a single business and under two businesses. We use the filtered data to calculate the learning cost coefficient of each knowledge point with the Bayesian formula; then we filter out irrelevant knowledge points and use the expectation formula to calculate the average learning cost coefficient of the entire course. This coefficient can be used as a criterion for assessing the quality of a teacher's instruction. In theory, this expectation should be greater than 1, and the closer it is to 1, the higher the teaching quality: ideally, students only need to watch a video once to learn its content. The utility of the MEC server changes with the pricing factor and increases continuously as the pricing factor grows; in practice, the more users pay, the more MEC benefits increase. Above a certain pricing-factor threshold, the technique employed in this study is more effective than a scheme with a constant computation rate. It is also clear that as the pricing factor rises, the utility of the SCBS decreases, because the SCBS pays more to the MEC server; even so, this scheme's utility remains higher than that of the fixed-computation-rate scheme.
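The learning cost coefficient and its course-level expectation can be sketched as follows. Treating each knowledge point's coefficient as actual viewing time divided by video length, and using uniform weights unless empirical knowledge-point probabilities are supplied, is an interpretation of the description above rather than the paper's exact formula.

```python
def learning_cost_coefficients(study_times, video_lengths):
    """Per-knowledge-point cost coefficient: actual viewing time divided
    by video length. A value near 1 means one viewing sufficed."""
    return [t / l for t, l in zip(study_times, video_lengths)]

def course_cost(coefficients, weights=None):
    """Expected learning cost of the whole course: a weighted mean of the
    knowledge-point coefficients (uniform weights by default)."""
    if weights is None:
        weights = [1 / len(coefficients)] * len(coefficients)
    return sum(c * w for c, w in zip(coefficients, weights))
```

A course expectation just above 1 would then indicate high teaching quality under this reading, since most students absorbed each video in a single viewing.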

4.3. Example Application and Analysis

The network switching equipment used in this article mainly comprises ONetSwitch 30 switches from the V330 OpenFlow series. The ONetSwitch 30 is an open source OpenFlow switch that integrates computing, network, and storage resources, has the advantage of software and hardware programmability, and can help users quickly build a real SDN network system. It integrates the open source software switch Open vSwitch and supports the OpenFlow v1.3 protocol. In view of its programmable technical advantages, this paper implements the attack detection algorithm in OVS v2.7.2 and transplants it to the ONetSwitch 30. In SDWSN, working modes such as centralized management, periodic topology collection, and flow-table request processing mean that control messages such as topology messages and flow-table requests from each node must be forwarded to the controller separately, which brings additional control pressure to the network. The ERMAS mechanism can effectively aggregate such control messages, reducing the number of control traffic transmissions, and ERMAS delivers all paths to subsequent nodes in the early stage of the network or when data routing requests are frequent, which reduces the number of flow-table requests and thereby effectively reduces control overhead. At the same time, low control overhead also means that when the same number of data packets is sent, ETMRM generates fewer control messages and consumes less control energy.

Figure 7 shows the detection rate distribution of mobile edge computing networks. The normalized entropy curves measured at different detection intervals have the same trend: for the smallest detection interval tested, the attack was detected at 105 s; for the two larger intervals, at 110 s and 112 s, respectively. Thus the smaller the monitoring interval, the earlier the attack is detected and the smaller the corresponding detection delay; however, this adds to the switch's computational complexity. In theory, the manager generates a normal distribution function by counting the learning times for correct answers, and erroneous data is removed using a confidence interval; false data should likewise be eliminated when counting the learning times for wrong answers. The left and right confidence bounds of the two distributions may differ slightly and must be assessed against actual conditions. Therefore, according to monitoring needs, a larger interval can be used when the network status is stable and a smaller interval when the network status fluctuates greatly, and the controller can adjust the interval dynamically according to the security situation of the entire network. Figure 8 shows the fluctuation of the mobile edge computing network status.
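The normalized entropy used for detection can be computed as below. Measuring Shannon entropy over the observations in a monitoring window (for example, packet destination addresses) and normalizing by the log of the alphabet size is a standard formulation and an assumption about the paper's exact metric.

```python
import math
from collections import Counter

def normalized_entropy(samples):
    """Shannon entropy of the empirical distribution of the samples,
    normalised to [0, 1] by log2 of the number of distinct values.
    A sharp drop means traffic has concentrated on few destinations,
    which flags a possible attack in the monitoring window."""
    counts = Counter(samples)
    n = sum(counts.values())
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0
```

A shorter monitoring interval yields more windows per second and hence earlier detection, at the cost of computing this statistic more often on the switch.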

Once the expected value has been obtained, the variance can be calculated. In the algorithm, the variance characterizes the dispersion of course difficulty. Even after knowledge points outside the confidence interval are removed, significant differences in difficulty remain: some knowledge points have a learning cost two or even three times the video length of the knowledge point, while others cost only 1.5 times the length. When two courses are compared, their final expected values may be the same, but a higher variance indicates that some knowledge points in the course are set too easy while others are too difficult. As the total amount of task input data grows, the system latency increases, indicating that as the total amount of data to be processed grows, the entire system requires more time to complete the computation. The fourth set of experiments produced a large system delay for the mobile edge computing algorithm, so the delay results did not show a uniform upward trend; this is due to the algorithm's randomness, since certain user-server-subchannel matchings can cause greater system delay even when the total amount of data in the system is not large.
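The confidence-interval filtering and difficulty-dispersion computation can be sketched as follows; the mean ± z·stdev filter, with z as a tunable parameter, is an illustrative assumption about how the outlier removal described above might be done.

```python
import statistics

def course_difficulty(costs, z=2.0):
    """Mean and variance of knowledge-point cost coefficients after
    discarding points outside mean ± z·stdev (a simple confidence-
    interval filter; z = 2 is an assumed default). The mean is the
    course's expected difficulty; the variance is its dispersion."""
    mu = statistics.mean(costs)
    sd = statistics.pstdev(costs)
    kept = [c for c in costs if abs(c - mu) <= z * sd] or costs
    return statistics.mean(kept), statistics.pvariance(kept)
```

Two courses with the same mean but different variances would then be distinguished by the second return value: the higher-variance course mixes too-easy and too-hard knowledge points.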

5. Conclusion

When facing mobile edge computing tasks, where task completion time directly affects actual energy consumption, this paper proposes a college network education model that takes into account the ideal task completion time limit, hardware failures, data-processing program failures, and other factors, and realizes the analysis and evaluation of expected task execution time and expected energy consumption through a big data association modeling method. First, this paper implements a comprehensive trust management mechanism against attacking nodes by combining a Bayesian-based lightweight local trust evaluation mechanism with the controller's global trust management mechanism; malicious nodes in the network are then effectively identified and isolated using global trust. This paper also proposes an energy-efficient topology information aggregation and collection mechanism, designed to reduce the energy consumed by global topology collection; to achieve energy-efficient and highly reliable topology collection, an integer nonlinear programming (INLP) problem is solved to elect aggregation nodes. When dealing with big data tasks, the complex decision behavior of subtask segmentation and redundant execution of subtasks is fully considered, and a method for solving the probability distribution function of random task completion time is designed for this distributed redundant parallel computing environment. The experimental results show that the established theoretical model provides significant theoretical evaluation and analysis for the optimal resource allocation strategy for complex computing tasks, the optimal segmentation of big data tasks, and the formulation of redundant execution strategies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares no conflicts of interest.