Abstract

With the rapid development of information technology, the information network has become a fundamental guarantee of social progress and the basic material foundation of the information society. It is ubiquitous, and human activities depend on it heavily; education, too, is inseparable from the Internet, and online learning is emerging as a new form of basic education. At the same time, the information received by a single sensor is always incomplete: it reflects what is measured on a particular part of the object rather than the object as a whole. It is therefore necessary to use multiple sensors to observe an object from multiple angles, obtain additional dimensions of information about it, and combine this multidimensional information. This paper introduces the design and implementation of a teaching system for computer network courses, starting from the overall concept of the system, the choice of development methods, and the concrete implementation plan, and then describes the content of each module of the teaching system in detail. The recognition success rates of the five groups of experiments based on multisensor information fusion are 96.9%, 95.7%, 98.4%, 96.3%, and 98.7%, respectively.

1. Introduction

Computer networking is a new, highly complex, and practice-oriented technology. With the rapid development of information and communication technology, the knowledge and techniques related to networking are also advancing rapidly. Education and training must therefore focus on student learning, knowledge acquisition, and lifelong learning. Knowledge about information networks underpins knowledge about communication technology interfaces and can only be verified in a suitable experimental setting.

With the wide spread of information networks and the development of related technologies, the need for information network skills at all levels has become urgent. In school, students need to acquire basic knowledge and skills so that they can better meet the needs of society in the future. ICT teaching in the classroom should develop a wide range of skills in students: teachers should not only convey theoretical knowledge and practical skills but also fully stimulate students' enthusiasm for learning, encourage them to find, analyze, and solve problems, and provide opportunities for initiative. Regarding multisensor information fusion, researchers have carried out the following work. Liu B J proposed a new error analysis method to address incomplete and inaccurate data in the analysis of complex multivariable systems; the approach combines multisensor data using BP neural networks and evidence theory and effectively improves system reliability [1]. Based on a collaborative simulation platform, Zhu et al. proposed a novel control method that fuses the information of multiple sensors; owing to this fusion, the method is better able to guarantee indoor thermal comfort [2]. Wang et al. used historical data from various sensors to capture spatial connectivity and enable data integration; combined with their two-step spatiotemporal fusion model, the observations provide a sound basis for anticipating geohazards caused by surrounding rock [3]. Liu et al. presented a method for dynamic obstacle recognition based on the fusion of multiple sensor data; they proposed a Kalman filter-based approach that combines observations from BeiDou devices and inertial measurements to obtain the position of farm equipment [4]. Jin et al. used the sensors' data and a distributed data fusion structure to design relative navigation filters and obtain relative velocity and position estimates; mathematical simulations show that this method improves the accuracy and reliability of the relative navigation system of a UAV, and the proposed algorithm was validated [5]. Lv et al. designed a fuzzy neural network algorithm for obstacle avoidance based on the integration of multisensor information and verified its efficiency and reliability in UGV field experiments [6]. Xu et al. proposed a particle filter algorithm based on the combination of several sensors; the algorithm combines smartphone Wi-Fi positioning with inertial positioning and filters the position results with a particle filter, and compared with Wi-Fi-only positioning algorithms it improves positioning accuracy and reliability [7]. Shifat and Hur investigated sensor-acquired vibration and current signals to develop a robust framework for multifault diagnosis of BLDC motors [8]. Ma et al. proposed a cubature-based Gaussian mixture implementation of intelligent filters and extended the filters to multisensor applications; an iterative correction scheme is used for multisensor information fusion, and the validity of the approach was demonstrated in a multitarget tracking scenario [9]. Wang and Li proposed new types of measuring instruments, including telescopic sensors and laser range finders, which can provide better real-time accuracy and richer features for local multisensor monitoring software.
Practical data and reference values from this work should contribute to further research and have a positive impact on future system design and software development [10]. Subedi et al. introduced a new technique for multisensor evacuation detection; the technique uses a recursive feedback structure in which the packing algorithms also benefit from earlier target dynamics, and the recursive structure outperforms the traditional method because vector measurement is facilitated even with missing patterns and increased noise [11]. Hu et al. provided models for data observation, changing natural states, dynamic observation, and social behavior; the results of their case study confirm the available sensor configuration options and have significant theoretical and practical implications for the integrated management of multisensor observations [12]. Based on the BP neural network algorithm and multisource data fusion theory, Zheng et al. proposed track-correlation algorithms for radar and introduced Kalman filters and particle filters to perform nonlinear track filtering; simulation analysis on measured data shows that the algorithm achieves track association and genuinely improves track accuracy [13]. Yi et al. proposed a new method for tracking multiple scattered targets with multistatic radar systems; the method uses a generalized multitarget density function to combine measurements within a Bayesian multitarget filtering framework and is particularly suitable for fusing posterior densities from multiple sensors [14]. Based on the fusion of feature information obtained by the various sensors of a reciprocating compressor, Ming and Jiang proposed a fault diagnosis method for reciprocating compressors; diagnosis by multisource information fusion has high reliability and low uncertainty, and their method can accurately identify reciprocating compressor failures [15]. The studies above analyzed multisensor information fusion in detail and have made remarkable progress in related areas, and much can be learned from their methods and data analysis. However, there is little research that brings this fusion technology into education, and it should be fully exploited in research in this field.

The novelty of this paper is that it addresses the problem that the credibility of evidence is often judged by experience, which makes decisions unreliable, and proposes using a BP neural network to judge credibility. It combines the BP neural network with an improved evidence theory for target recognition and verifies the recognition accuracy of the combined algorithm through simulation experiments. Five groups of experimental subjects are used to test identity recognition based on multisensor information fusion.

2. Design Method of Computer Network Education Teaching System

2.1. Multisensor Information Fusion

The general process of human cognition of objective things is to comprehensively use various senses such as sight, hearing, touch, smell, and taste to perceive things in different directions, so as to obtain multidimensional information of things. Then it analyzes and processes the information based on prior knowledge and logical reasoning and finally obtains the judgment and understanding of objective things. Multisensor information fusion is the use of machines to imitate human cognitive processes. It fuses and comprehensively analyzes the information of different dimensions obtained by different sensors according to certain criteria, so as to obtain a more correct judgment on the observation target or situation. Figure 1 shows a schematic diagram of multisensor information fusion.

In the centralized fusion structure, the individual sensors do not perform operations such as registration and filtering on their own. After suitable preprocessing, the data are sent directly to the fusion center, where all subsequent processing is carried out. The amount of data handled by the fusion center is huge and the computational overhead is high, so the fusion system needs relatively strong data transmission capability, storage capacity, and computing speed. The cost of such a fusion system is higher, but there is no information loss and the system performance is better. In practical engineering, the centralized fusion structure is generally used to process data from multiple sensors of the same type.

In the distributed fusion structure, each sensor has greater independence. Each sensor makes an independent decision based on its own data, and these decisions are then fused in the fusion center. Taking track fusion as an example, each sensor detects a local track and completes the tracking of the target; the fusion center then performs registration, association, filtering, and other operations on the local tracks to complete track fusion. The distributed fusion structure uses the predecisions of the individual sensors to compress the data. Its data traffic is small, its reliability is high, and its cost is low, but the accuracy of the fusion results decreases.

The hybrid fusion structure sends the original data and predecision to the fusion center at the same time, and the fusion center can fuse the original data and the decision vector at the same time, which effectively improves the reliability and accuracy of the system. The hybrid fusion structure integrates the advantages of the centralized fusion structure and the distributed fusion structure, which can not only fuse the original data directly but also fuse the independent decision-making of each sensor. The accuracy and robustness of the system have been greatly improved. The disadvantage is that the amount of calculation is large, and there is a high requirement for the data transmission capacity of the fusion system.
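To make the distinction concrete, the following minimal Python sketch (not from the paper; the sensor noise levels and decision threshold are assumed) contrasts a centralized step that pools raw readings with a distributed step that only pools each sensor's local decision.

```python
import numpy as np

# Illustrative sketch: centralized fusion pools raw measurements, while
# distributed fusion only pools each sensor's local decision.
rng = np.random.default_rng(0)
true_value = 5.0
noise_std = np.array([0.5, 1.0, 2.0])           # assumed per-sensor noise levels
raw = true_value + rng.normal(0.0, noise_std)   # one raw reading per sensor

# Centralized fusion: the fusion center sees the raw data and can weight
# each reading by the inverse of its variance.
w = 1.0 / noise_std**2
centralized_estimate = np.sum(w * raw) / np.sum(w)

# Distributed fusion: each sensor first makes a local decision (here, a
# simple threshold test), and only the decisions reach the fusion center.
local_decisions = raw > 4.0
distributed_decision = np.sum(local_decisions) >= 2   # majority vote

print(centralized_estimate, distributed_decision)
```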

According to information theory, the information content of combined multidimensional information is larger than that of any single one-dimensional component, which is the theoretical basis for multidimensional information fusion. For a source with probability distribution $p(x_i)$, the information entropy is
$$H(X) = -\sum_{i} p(x_i)\log p(x_i),$$
and the joint Shannon entropy of two sources is
$$H(X,Y) = -\sum_{x}\sum_{y} p(x,y)\log p(x,y).$$
Each of the $n$ measurements is described by the azimuth and the pitch angle of the sensor, and the system state evolves according to
$$\mathbf{x}_k = \mathbf{\Phi}_{k,k-1}\,\mathbf{x}_{k-1} + \mathbf{B}_{k-1}\,\mathbf{u}_{k-1} + \mathbf{w}_{k-1},$$
where $\mathbf{x}_k$ is the state vector, $\mathbf{\Phi}_{k,k-1}$ is the system state transition matrix, $\mathbf{B}_{k-1}$ is the input control matrix, $\mathbf{u}_{k-1}$ is the known input signal, and $\mathbf{w}_{k-1}$ is the process noise sequence. The loss function is initialized with a constant value and updated from its current value during training; the regression tree output is weighted by the corresponding degrees of membership defined on the fuzzy sets of the input signal. In evidence combination, the average level of conflict, the average support of each piece of evidence, and the credibility of the evidence are computed from the Euclidean distances between bodies of evidence. In the BP network, the output value of each unit is obtained by applying the activation function to the weighted sum of its inputs minus the node threshold, $y_j = f\left(\sum_i w_{ij}x_i - \theta_j\right)$.
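As an illustration of this information-theoretic basis, the short Python sketch below computes the empirical Shannon entropy of two hypothetical discretized sensor streams and their joint entropy; the data are invented for illustration and are not from the paper's experiments.

```python
import numpy as np
from collections import Counter

def shannon_entropy(symbols):
    """Empirical Shannon entropy H(X) = -sum p(x) log2 p(x) of a sequence."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical discretized readings from two sensors observing the same target.
sensor_a = [0, 0, 1, 1, 2, 2, 2, 1]
sensor_b = [1, 0, 1, 2, 2, 0, 2, 1]

h_a = shannon_entropy(sensor_a)
h_b = shannon_entropy(sensor_b)
h_joint = shannon_entropy(list(zip(sensor_a, sensor_b)))  # joint entropy H(A, B)

# H(A, B) >= max(H(A), H(B)): the combined observation carries at least as
# much information as either sensor alone.
print(h_a, h_b, h_joint)
```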

The basic principle of multisensor fusion technology is similar to the way the human brain processes information. It performs an optimal combination of the multilevel and multispatial data acquired and processed by the various sensors and ultimately produces a consistent interpretation of the observed environment. In this process, all available data sources must be exploited fully and reasonably. The ultimate goal of fusion is to derive more useful information by combining data at different levels and from different aspects, based on the individual observations obtained by each sensor.

Sound theoretical foundations and fusion algorithms still need to be improved, because most fusion techniques are developed for specific applications. Guiding fusion standards for practical problems need to be established so that effective data fusion systems can be built; with a complete theoretical framework and model of fusion, blind application of fusion technology can be avoided. Algorithms for asynchronous data fusion, fuzzy neural networks, fault diagnosis, and self-organizing mapping are all new methods worth keeping in mind.

The information of an information fusion system mainly consists of an a priori database, manually entered information, and multisensor sensing information. In practical engineering, the detection information from the sensors is the most important source. The information detected by multiple sensors can be divided into three categories: (1) Redundant information refers to the repeated detection of targets by multiple sensors, which yields a large amount of homogeneous, repeated information. It includes not only information detected by multiple sensors on the same target attribute but also information obtained by a single sensor repeatedly detecting the same target attribute over a period of time. Redundant information is not useless: it can offset various kinds of interference in the transmission process, reduce uncertainty, and improve the reliability of the system. (2) Complementary information refers to the multidimensional information obtained when multiple sensors detect the target from different angles and with respect to different characteristic attributes. Each dimension describes a particular aspect of the object to be measured, and the combination of all dimensions provides a complete description of the target. (3) Collaborative information refers to information that a single sensor cannot obtain by itself but that requires cooperation with other sensors. There are two cases: either one sensor needs the detection information of another sensor in order to obtain certain information, or several sensors cooperate simultaneously to complete the information acquisition. Organized in this way, the information fusion system assigns different sensors different divisions of labor, which can greatly improve the efficiency of information fusion.
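A minimal sketch of why redundant information helps, under the assumption of independent zero-mean noise (the noise level and number of readings are illustrative): averaging repeated readings of the same attribute shrinks the error of the fused estimate.

```python
import numpy as np

# Redundant readings of the same target attribute, corrupted by independent
# noise; averaging them reduces uncertainty compared with any single reading.
rng = np.random.default_rng(1)
true_value = 10.0
readings = true_value + rng.normal(0.0, 1.0, size=20)  # 20 redundant readings

single_error = abs(readings[0] - true_value)   # error of one sensor reading
fused_error = abs(readings.mean() - true_value)  # error of the fused estimate
print(single_error, fused_error)
```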

The Kalman filtering process is essentially a cycle of prediction and correction. It is not only a simple and elegant algorithm for multisensor data fusion but also a very practical one. Two variants are commonly used: the Distributed Kalman Filter (DKF) and the Extended Kalman Filter (EKF). The DKF allows data fusion to be fully decentralized, while the EKF can overcome the effects of modeling errors and instability in the data fusion process.
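The following is a minimal one-dimensional predict/correct sketch of the Kalman filter cycle described above; the state model, noise covariances, and measurements are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.25):
    # Prediction: propagate the state estimate and its covariance.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Correction: weight the new measurement z by the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                       # initial state estimate and covariance
measurements = [0.9, 1.1, 1.0, 0.95]  # hypothetical sensor readings
for z in measurements:
    x, P = kalman_step(x, P, z)
print(x, P)
```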

The information fusion process is usually divided into the following steps. Multiple sensors are used for signal acquisition, covering both electrical and nonelectrical signals; nonelectrical signals are first converted into electrical signals, and all signals are then converted into digital signals by an analog-to-digital converter. Information preprocessing includes operations such as filtering noise, removing outliers, and filling in missing data. The feature vector of the target is then extracted, that is, the feature attributes that accurately express the target information are extracted according to certain rules. Finally, depending on the actual scenario, a suitable algorithm fuses the information and the result is judged according to established decision rules. Figure 2 shows the information fusion process.
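As one possible realization of the preprocessing step (noise filtering, outlier removal, missing-data filling), the sketch below uses a simple z-score rule and a moving-average filter; the thresholds, window size, and data are assumptions for illustration only.

```python
import numpy as np

def preprocess(signal, window=3, z_thresh=3.0):
    x = np.asarray(signal, dtype=float)
    # Fill missing samples (NaN) with the mean of the observed samples.
    x = np.where(np.isnan(x), np.nanmean(x), x)
    # Remove outliers: replace samples far from the mean with the median.
    z = (x - x.mean()) / (x.std() + 1e-12)
    x = np.where(np.abs(z) > z_thresh, np.median(x), x)
    # Simple moving-average filter to suppress noise.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

raw = [1.0, 1.2, np.nan, 1.1, 9.0, 1.0, 0.9]  # hypothetical raw readings
print(preprocess(raw))
```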

Data-layer fusion refers to using multiple sensors to collect data from the measured object and fusing the directly collected raw data. The attribute vector of the object is then extracted from the fused data, and the analysis result is finally obtained from that attribute vector. Data-layer fusion requires the sensors to detect the same measurable signal property: the data obtained are of the same kind, and the fusion rules at this level are not suited to heterogeneous data. This is a limitation of data-layer fusion; combining heterogeneous data requires the feature layer or the later decision layer.

Data-layer fusion is the fusion of the raw data collected by the sensors. The fused data are the most faithful and comprehensive, so the results of data-layer fusion are often more accurate than those of feature-layer and decision-layer fusion. However, data-layer fusion also has limitations, for the following reasons. The data are fused at a low level of processing and the volume of raw data is very large, which leads to a significant increase in computation before fusion; fusion takes a long time, and real-time operation of the system cannot be guaranteed. Much of the raw data observed by the sensors is incomplete and inaccurate, so to reach a correct discrimination result the fusion system must have strong fault tolerance and error correction capability. Data-layer fusion can only be applied to data of the same kind; heterogeneous data cannot be fused at this level, which limits its use in practice. Because of the large volume of raw data detected by the sensors, the fusion system cannot identify all erroneous information, nor can it guarantee the reliability of all data, so interference may increase.

The weighted average fusion algorithm is relatively simple and easy to understand. Its basic idea is to multiply the data detected by each sensor by the corresponding weight and then sum the weighted values; the resulting sum is the basis for the decision. The algorithm therefore requires little computation and is very simple to implement, and its main advantage is good real-time performance.
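A minimal sketch of the weighted average fusion rule just described, with hypothetical readings and assumed reliability weights:

```python
import numpy as np

# Weighted average fusion: multiply each sensor reading by its weight and
# sum. The weights are assumed, e.g. set in proportion to each sensor's
# estimated reliability, and normalized to sum to one.
readings = np.array([20.1, 19.8, 20.5])   # hypothetical sensor outputs
weights = np.array([0.5, 0.3, 0.2])       # assumed reliability weights
weights = weights / weights.sum()

fused = float(np.dot(weights, readings))  # weighted average, decision basis
print(fused)
```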

2.2. Computer Network Education and Teaching System

A computer network is a complex networked system. It uses communication links, switching equipment, and appropriate network protocols to connect independent, geographically scattered computers so that they can achieve specific functions. In such a network, no computer has complete control over the others; each computer is autonomous. Nevertheless, these computers are not isolated from one another and can freely use the information resources on the network. The communication subnet and the resource subnet together form the computer network: the former is responsible for transferring data between computers, and the latter is responsible for processing data and providing information resources to the network. Figuratively speaking, a computer network is a bridge between independent computers that enables them to exchange information freely.

A computer network enables data transfer. Data communication is a new form of communication created by the integration of communication technology and information technology. Depending on the transmission medium, it includes wired data transmission and wireless data transmission. In either case, the transmission network connects all data centers and computers so that sites in different locations can share software, hardware, and data resources.

Its unique structure and functions make a computer network distinctive. A computer network consists of a large number of computers that are often geographically far apart. Within the network, resources can be shared, yet each computer remains independent of the others, especially in how it operates. Without communication facilities such as transmission links and switches, the computers cannot be connected into a network, so a computer network cannot exist without them. The need for interoperability between different kinds of computer systems places ever higher requirements on computer networks, demanding interoperable processing and resource sharing through communication facilities.

Computer network reliability design directly serves two groups: users and operators. These two groups have different requirements for network optimization from their own perspectives. The former hope for a better user experience and better services, while the latter hope to obtain greater economic benefit from the perspective of cost control. For operators, however, users are the starting point and the end point of consumption: only high-quality services can attract more users and ultimately bring more profit.

A computer network consists of three main parts: user equipment, the transmission network, and network applications. User equipment consists primarily of terminals and various servers. The network equipment includes network infrastructure, communication devices, network services, and network components; the transmission network itself is made up of switching devices, nodes, and communication links. Network applications consist primarily of network operating systems, user applications, various network protocols, and network management software. Figure 3 shows the execution flow of the computer network reliability intelligence algorithm.

The overall architecture of the education system implements the modules that meet the system requirements. It takes full advantage of the principle of high cohesion and low coupling, dividing the different functions into independent functional modules. This design splits the system into several independent modules and has the following important advantages: the system is easy to deploy and upgrade, and the work can be shared among different people; the dependence between modules is low, which makes interconnection and coordination among staff convenient; and the reliability of the system is high, because there is little interaction between modules and the probability of failure is low. Based on these advantages and characteristics, the system is divided into modules such as registration, student management, e-learning, online self-testing, online chat, and video training. These submodules can be further subdivided depending on the complexity of the function. Figure 4 shows a block diagram of the structure of the system.

Components of a computer network: (1) Computer systems: workstations (terminals or clients, usually personal computers) and servers (typically high-performance computers). (2) Network communication equipment (transmission media, connection devices, and switching devices): network cards, network cables, hubs, switches, routers, etc. (3) External network equipment: high-performance printers, large-capacity hard disks, etc. (4) Network software: network operating systems such as Unix, NetWare, and Windows NT; client applications (including applications based on DOS, Windows, and Unix); network management software, etc.

Users of the computer network teaching system must register, and only authorized users can access the various functional submodules of the system. Access to each functional module requires a normal user login, and each module restricts the user's rights according to the services it provides. This structure is designed to ensure both the functionality and the security of the system. The main threats facing today's computer network teaching systems are as follows. Unauthorized access is the use of network or computer resources without permission, for example, intentionally bypassing the system's access control mechanisms, using network devices and resources abnormally or improperly, or escalating privileges without authorization and reading sensitive data through other ports. Illegal use mainly takes the following forms: forgery, identity theft, unauthorized access to the network system for illegal activities, and unauthorized operations by unapproved users. Data loss or leakage refers to the intentional or accidental loss or disclosure of sensitive data, and it usually occurs during transmission. For example, attackers intercept sensitive data by eavesdropping on or snooping the network, obtain valuable information such as user IDs and passwords by analyzing parameters such as data flow, data rate, frequency, and communication time, or steal confidential information by establishing covert tunnels.

Computer network topology refers to the physical layout in which the hardware and devices of a computer network are connected. In a figurative sense, this structure is the skeleton of the entire computer network, and it has a significant impact on the stability of the network. A well-chosen topology allows the network design and planning to meet the expected standards and best satisfy users' communication needs.

The topology of the computer network must meet the following requirements. First, the structure of modern buildings is very complex, and the network topology must adapt to the communication environment between buildings. Second, the network design must take constructability into account so that the network can be built easily and at low cost. Third, the choice of computer connections should fully consider actual needs and costs. Fourth, the network design must take into account full compatibility with devices from different manufacturers. Fifth, during the topological design of the computer network, repeated demonstration and verification must be carried out.

3. Design Experiment and Analysis

The fusion object of feature-level fusion is the feature vector of each sensor. Feature-level fusion does not directly utilize raw data. In feature-level fusion, the extracted feature vector determines the performance of the fusion, and the features extracted by each sensor are required to fully characterize or reflect the nature of the data and target. Feature-level fusion realizes distributed processing of data by preextracting features from each sensor. It reduces the requirements for the communication capability of the fusion system, and the real-time performance of the system is also improved. However, due to the lack of access to the original data, some subtle information is ignored. This will cause a certain loss of information, which will affect the final fusion result. Figure 5 shows a schematic diagram of feature-level fusion.
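To illustrate the idea, the sketch below extracts a small hand-picked feature vector (mean, standard deviation, peak) from each sensor's raw data and concatenates the vectors before a placeholder decision rule; the signals, features, and threshold are assumptions and do not reproduce the paper's feature set.

```python
import numpy as np

def extract_features(raw):
    """Per-sensor feature extraction: mean, standard deviation, peak value."""
    raw = np.asarray(raw, dtype=float)
    return np.array([raw.mean(), raw.std(), raw.max()])

sensor_1 = [0.1, 0.4, 0.3, 0.9]   # hypothetical raw data, sensor 1
sensor_2 = [1.2, 1.1, 0.8, 1.5]   # hypothetical raw data, sensor 2

# Feature-level fusion: concatenate the feature vectors from both sensors.
fused_features = np.concatenate([extract_features(sensor_1),
                                 extract_features(sensor_2)])

# A trained classifier would consume fused_features; here a placeholder rule.
is_target = fused_features.mean() > 0.5
print(fused_features, is_target)
```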

The proposed feature-level fusion is simulated and tested, and different distances are selected to conduct 2,000 experiments, respectively. The simulation results obtained are shown in Tables 1 and 2, respectively.

As shown in Table 3, the basic probability assignments were determined after the three sensors detected the target.

The pieces of evidence with a high degree of conflict are combined, and the combined results are shown in Table 4.

To avoid chance effects, ten datasets were used for testing. Each dataset is already labeled with categories. After the feature set is constructed, the features are fed into the classifier to learn a model, and the performance of the learned model is finally evaluated on a test set. Figure 6 shows the target recognition results. It can be observed that recognition based on the fused feature set achieves the highest accuracy: the higher-order features obtained from fusion are more refined, and with these refined higher-order features the best results are obtained.

The test samples were passed through 4 BP neural networks to obtain 4 groups of values. The values produced by each BP network are normalized to yield the basic probability assignment of that piece of evidence to each proposition. The basic probability assignments are shown in Figure 7. It can be seen that, after normalization, the values lie in the interval [0, 1]. The number of nodes in the input layer of each BP network equals the number of elements in the attribute vector, and the number of nodes in the output layer equals the number of targets to be identified.
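The normalization step can be sketched as follows; the BP network output vectors are hypothetical, and each row stands for one network's scores over the candidate targets.

```python
import numpy as np

# Turn raw BP network outputs into basic probability assignments (BPAs) by
# row-wise normalization: each row then lies in [0, 1] and sums to 1, giving
# one piece of evidence over the target propositions per network.
bp_outputs = np.array([
    [0.82, 0.10, 0.05],   # network 1 scores for targets A, B, C (assumed)
    [0.75, 0.20, 0.10],
    [0.60, 0.35, 0.15],
    [0.90, 0.05, 0.08],
])

bpa = bp_outputs / bp_outputs.sum(axis=1, keepdims=True)
print(bpa)
```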

Using the above six methods, four groups of low-conflict evidence were combined, and the combined results are shown in Figure 8.

It can be seen that the Ya method assigns 86% probability to the unknown proposition, so no judgment can be made. The Lp and Lz methods are the same as in the above examples, and the conclusions given are not very clear. It needs the support of more evidence to make a clearer judgment. The DS evidence theory method, the Mu method, and the method in this paper all give high-precision judgments, and we can see the superiority of the classical evidence theory in synthesizing low-conflict evidence. Of course, the decision accuracy of the algorithm in this paper is also very close to the classical evidence theory.

Before accessing the user information database to authorize a user, the system must first authenticate the user; the identity authentication system then returns the user's unique identifier to the authorization system. Figure 9 shows the user identity and authorization process of the computer network education teaching system.
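A minimal sketch of this authenticate-then-authorize flow, with hypothetical user records, module names, and permission sets (the real system's interfaces are not specified in the paper):

```python
# Authenticate the user first, obtain a unique identifier, then let each
# functional module check that identifier against its own permission list.
USERS = {"alice": "secret123"}                         # assumed credential store
PERMISSIONS = {"alice": {"e-learning", "self-check"}}  # per-user module rights

def authenticate(username, password):
    """Return a unique user identifier on success, otherwise None."""
    if USERS.get(username) == password:
        return f"uid-{username}"
    return None

def authorize(user_id, module):
    """Check whether the authenticated user may access a functional module."""
    if user_id is None:
        return False
    username = user_id.removeprefix("uid-")
    return module in PERMISSIONS.get(username, set())

uid = authenticate("alice", "secret123")
print(authorize(uid, "e-learning"), authorize(uid, "video-training"))
```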

Five groups of students of different sizes were taken as experimental subjects. Figure 10 shows the number of students in each group and the number of successful detections. It can be seen that the numbers of successful detections are 248, 314, 428, 237, and 379, respectively.

4. Discussion

The teaching of computer experiments is now widely delivered through technology-based methods and online courses; teacher guidance is limited, and students mostly learn independently. Making the goals of e-learning explicit therefore becomes increasingly important. Based on the computer network curriculum and the nature of computer network experiments, the overall purpose of computer network experimental learning is to improve students' computer competence. On this basis, and in accordance with the criteria and definitions of scientific literacy, the experimental learning objectives of computer network courses are divided into three dimensions: experimental knowledge and skills; experimental processes and methods; and emotional attitudes and values.

Feature layer fusion belongs to the fusion of intermediate layers. Each sensor first detects the target information and then analyzes and extracts the original features of the detected target (such as the type of radiation source signal, geometric size, and geographic location). It is expressed in the form of a vector, and finally, the feature data extracted from each sensor information source is sent to the system fusion center, and the fusion center centrally fuses these data and makes the final judgment. Compared with data layer fusion, feature layer fusion is no longer dealing with a large amount of original data but part of the data extracted according to the target core features, so the amount of data is greatly reduced. This can reduce computational complexity and facilitate real-time processing. However, this mode only fuses feature vector information and discards nonfeature vector information, which also reduces the fusion performance. Feature layer fusion is a fusion mode between the data layer and the decision layer, which is relatively more flexible and has a wider range of applications. Commonly used algorithms are weighted average method, neural network, and rough set theory.

This paper proposes a method for determining the basic probability assignment using a BP neural network. The BP neural network has strong nonlinear mapping ability and can capture the internal correlations of the observation data. The observations are fed through the BP neural network to obtain an accurate and reliable probability assignment. The BP neural network is combined with an improved evidence theory for object recognition, and all objects are finally recognized correctly; compared with other algorithms, its accuracy is higher. The experimental results show that the neural network can reliably obtain the probability assignment and that the improved evidence-theoretic algorithm is effective in object recognition applications. The larger the distance of a piece of evidence from the others, the more of an outlier it is and the smaller the weight it receives; conversely, the closer a piece of evidence is to the evidence center, the larger the weight it receives, indicating that the evidence is highly credible. Finally, the algorithm in this paper is compared with D-S evidence theory and related improved algorithms through numerical examples. The experimental results show that the algorithm can handle both weakly and strongly conflicting evidence effectively, and its fusion performance is better than that of the other methods.
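The distance-based weighting and combination idea can be sketched as follows. This is one simple realization under stated assumptions, in the spirit of a weighted-average body of evidence followed by Dempster's rule, not the paper's exact algorithm; the BPAs are hypothetical and restricted to singleton propositions.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for BPAs defined on the same singleton propositions."""
    joint = np.outer(m1, m2)
    k = joint.sum() - np.trace(joint)   # total conflict mass
    return np.diag(joint) / (1.0 - k)   # normalized agreement mass

evidence = np.array([
    [0.7, 0.2, 0.1],    # sensor 1 BPA over targets A, B, C (assumed)
    [0.6, 0.3, 0.1],    # sensor 2
    [0.1, 0.1, 0.8],    # sensor 3: conflicting, should be down-weighted
])

# Average Euclidean distance of each BPA to the others -> weight: outlying
# evidence (large distance) gets a small weight, consistent evidence a large one.
dists = np.array([[np.linalg.norm(a - b) for b in evidence] for a in evidence])
avg_dist = dists.sum(axis=1) / (len(evidence) - 1)
weights = 1.0 / (avg_dist + 1e-12)
weights = weights / weights.sum()

# Weighted-average evidence, then combine it with itself (n - 1) times,
# a common way to fuse the adjusted evidence.
m_avg = weights @ evidence
fused = m_avg
for _ in range(len(evidence) - 1):
    fused = dempster_combine(fused, m_avg)
print(fused)
```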

5. Conclusion

Combining information from multiple sensors is also known as information fusion. Information fusion technology was first used in the military field, where the U.S. military used it to process sonar signals. With the rapid development of communication technology, electromagnetic technology, sensor technology, and information technology, data fusion now plays a very important role in many fields. In this paper, the proposed feature-level fusion is simulated and tested: different distances are selected, 2,000 experiments are conducted for each, and the simulation results are obtained. Using the proposed computer network education and teaching system, the number of students in each group and the number of successful detections are measured, and the user identity and authorization process is described. Because of the large scale of experimental learning and the limited time and research environment, the needs of learners have not been fully understood, and many effects remain to be developed and explored; future research should proceed in this direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.