As a basic course for information majors in colleges and universities, computer network is characterized by rich content, abstract theory, and concepts that are difficult to understand. This requires attention not only to theoretical teaching but also to experimental teaching. Current computer network experiment teaching mainly takes place in computer rooms, which consumes considerable manpower and material resources, while experiment teaching based purely on simulation loses much of the practical significance of network teaching. Based on an analysis of the characteristics of experimental teaching in computer network courses, this study designs a computer network experiment platform based on virtualization, aiming at the deficiencies of existing experimental teaching. With thread pool sizes of 1, 2, 3, and 4, the average response time of the system is 324873 ms, 279309 ms, 227300 ms, and 221670 ms, respectively.

1. Introduction

With the rapid development of information technology and ever-increasing social competition, enterprises place higher and higher technical requirements on information technology talents, especially students of computer software or network communication. In addition to managing basic network operations, such students must also be able to configure and manage network hardware and software to achieve high-quality network management. Traditional higher education institutions have long been unable to meet students' needs for information retrieval and management.

The virtual computer network experimental teaching platform integrates the advantages of electronic teaching. It solves the problems of equipment shortage and multi-group experiments, avoids both equipment wear during experiments and obsolescence caused by the rapid development of network equipment, and saves experiment costs. It offers fast deployment, low cost, and convenient management. Students can conduct simulation experiments in a relatively safe and free environment, overcome the limitations of time, space, and resources, and acquire relevant knowledge more effectively. Teachers and administrators can more easily supervise the experimental learning process and guide students.

Regarding signal processing, related researchers have done the following work: Lamare [1] presented signal processing challenges and future trends in large-scale systems and detailed key application scenarios. He examined transmitter and receiver processing algorithms, and simulation results illustrated their performance in the scenarios of interest [1]. Butterfield et al. [2] investigated signal processing techniques for quantifying leak flow using vibroacoustic emission monitoring. They deployed and evaluated several alternative signal processing techniques and developed a further model for buried pipelines, finding that vibroacoustic emission can produce good leak flow quantification [2]. Luo et al. [3] proposed a linear conversion solution to resolve the uncertainty of speech-to-color conversion when one of three speech features is used to control LED color. The results showed that the signal processing for hue-to-color conversion is effective [3]. He et al. [5] designed a load monitoring approach based on graph signal processing and evaluated it with simulated results from two real house datasets, which demonstrated the competitiveness of the GSP-based method compared with hidden Markov model-based and decision tree-based methods [5]. Liu et al. [6] advocated unlicensed, grant-free random access to overcome the challenge of massive IoT access. They highlighted a number of key enabling techniques, focusing on advanced compressed sensing technologies and their use for efficient detection of active devices [6]. Maheswari and Umamaheswari [7] provided a detailed analysis of the signal processing algorithms used in vibration analysis. They evaluated recent studies on nonlinear, nonstationary signal processing algorithms, especially those applicable to variable-speed wind turbines [7]. Chakravorti et al. [8] performed calculations to determine the target parameters. Because of the significant overlap of values under different noise models, a fuzzy estimation structure was included in the classification of ambiguous events, and it proved effective for most classes [8]. To further distinguish respiration and heart rate signals from a mixed-oscillation phase signal when the respiration signal has significant harmonics and varying frequency, He et al. [9] developed a new recovery and separation method based on a two-variable least-mean-square filter [9]. Wu et al. [10] aimed to develop an efficient method for processing noisy wave signals to detect damage and reduce noise. They used an efficient Bayesian learning algorithm to design a dictionary based on noisy signal data, which confirmed the effectiveness of the proposed method for detecting damage using pattern information and signal attenuation [10]. Liu et al. [11] introduced compressed sensing as a processing framework and proposed a method to extract features directly from compressed sensing data, demonstrating its energy-preserving properties [11]. Paisana et al. [12] provided the signal processing technology needed to implement time-division sharing, reducing radar interference and optimizing spectrum use. The method directly covers conventional radars operating at short distances on the same frequency, while allowing other users to transmit at no more than the specified radar level [12]. Ahmad et al. [13] examined common signal analysis methods for protein-coding regions. They also stressed the importance of the genetic code context in all computational solutions for identifying protein-coding regions; data processing systems that identify protein-coding sites based on nucleotide properties can significantly affect the signals [13]. Henry et al. [14] proposed a new network method of signal processing, valid for both design-time and real-time calculations, that meets IoT requirements. Various signal processing tasks can be performed in a prism network, providing a useful toolkit for a wide range of cognitive and surveillance applications [14]. Bone et al. [15] addressed the problem of identifying the hidden properties of the system that controls the body's signals, which they explored through new signal processing and machine learning on large-scale multimodal data. After deriving behavioral descriptors from the signals, machine learning was used to infer mental state, supporting people as well as autonomous decision making [15].

The above studies analyze signal processing in detail. It is undeniable that they have greatly promoted the development of the corresponding fields, and much can be learned from their methodology and data analysis. However, there is relatively little research on signal processing in the field of smart manufacturing, and it is necessary to apply these techniques fully to research in this field.

This study analyzes the requirements of the virtual network simulation learning platform and proposes its overall framework. It describes the specific features of each module and discusses the platform's development environment and basic technical tools. It demonstrates the execution interface of the system's main function modules, tests the system using examples and test tools, and analyzes the functional performance and load-bearing capacity of the system according to its operating state.

2. Design Method of Education and Teaching Platform

2.1. Image Signal Processing

Before a computer can process an actual scene, the scene must be captured from real light, processed in several stages, and finally represented as a series of matrices. This process is called the digitization of the image signal. Digital images are processed and stored in units of pixels. One pixel of a black-and-white digital image is represented by a single value, while one pixel of a color image is represented by three values. The most common example of image digitization is the digital video camera. Figure 1 shows the steps of image processing.
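The pixel representation described above can be illustrated with a small sketch; the array shapes and values here are our own example, not taken from the paper.

```python
import numpy as np

# A black-and-white digital image stores one value per pixel,
# while a color image stores three values per pixel (e.g. RGB).
gray = np.array([[0, 128],
                 [255, 64]], dtype=np.uint8)   # 2x2 grayscale image

color = np.zeros((2, 2, 3), dtype=np.uint8)    # 2x2 RGB image
color[0, 0] = (255, 0, 0)                      # top-left pixel is pure red

print(gray.shape)    # one value per pixel: (rows, cols)
print(color.shape)   # three values per pixel: (rows, cols, 3)
```

The digitized scene is thus nothing more than a matrix (or a stack of three matrices), which is exactly the form later processing steps operate on.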

Signal processing is a general term for processing various types of electrical signals according to various expected purposes and requirements. The processing of analog signals is called analog signal processing, and the processing of digital signals is called digital signal processing. "Signal processing" is the process of working on a signal recorded on some medium in order to extract useful information; it is a general term for extracting, transforming, analyzing, and synthesizing signals. To make use of a signal, people must process it. For example, when an electrical signal is weak, it needs to be amplified; when it is mixed with noise, it needs to be filtered; when its frequency is not suitable for transmission, modulation and demodulation are required; when the signal suffers distortion, it needs to be equalized; and when there are many types of signals, identification is required.
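As a minimal illustration of the filtering case above (the signal, noise level, and filter length are our own example values, not from any system in this paper), a simple moving-average FIR filter can suppress noise mixed into a signal:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                 # 5 Hz "useful" signal
noisy = clean + 0.5 * rng.standard_normal(t.size)  # signal mixed with noise

taps = np.ones(21) / 21                            # 21-tap moving-average kernel
filtered = np.convolve(noisy, taps, mode="same")   # low-pass FIR filtering

# Filtering should bring the signal closer to the clean reference.
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((filtered - clean) ** 2)
print(err_after < err_before)
```

A moving average is only the simplest low-pass filter; the Wiener, Kalman, and adaptive filters named below follow the same pattern of trading noise suppression against signal distortion.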

The most basic operations in signal processing are transformation, filtering, modulation, demodulation, detection, and spectral analysis and estimation. Transforms include the Fourier transform, sine transform, cosine transform, Walsh transform, and others. Filtering includes high-pass, low-pass, and band-pass filtering, Wiener filtering, Kalman filtering, linear and nonlinear filtering, and adaptive filtering. Spectral analysis covers both deterministic and random signals; the analysis of random signals, also known as statistical signal analysis or estimation, is the most common subject of study and is usually divided into linear spectral estimation and nonlinear spectral estimation.

(The equations of this subsection define, in turn: the pixel position and image index in the sequence; the actual scene illuminance, exposure amount, and exposure time; the pixel value; the unknown quantity; the number of pixels and number of exposures; the restriction; the illuminance value; the pixel brightness value; the weighting function; the synthesized pixel and its pixel values across images with different exposures; the compressed image pixel value; the average logarithmic brightness; the decay function with its range and degree of attenuation; the minimized objective function and its user-adjustable parameters; the exposure ratio; the segmented image fusion strategy; and the threshold area function.)
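The multi-exposure synthesis and tone mapping that these variable definitions describe can be sketched as follows. The hat-shaped weighting function, the log-domain fusion rule, and the global tone-mapping operator are standard textbook choices assumed here, not the paper's own formulas, and the function names (`fuse`, `tone_map`) are ours.

```python
import numpy as np

def weight(z):
    # Trust mid-range pixels more than under-/over-exposed ones.
    return 1.0 - np.abs(z - 0.5) * 2.0

def fuse(images, exposure_times):
    # Weighted estimate of log scene illuminance: log E = log Z - log(dt).
    log_e = np.zeros_like(images[0])
    w_sum = np.zeros_like(images[0])
    for img, dt in zip(images, exposure_times):
        w = weight(img) + 1e-6
        log_e += w * (np.log(img + 1e-6) - np.log(dt))
        w_sum += w
    return np.exp(log_e / w_sum)

def tone_map(radiance, key=0.18):
    # Scale by the average log brightness, then compress to [0, 1).
    l_avg = np.exp(np.mean(np.log(radiance + 1e-6)))
    l = key * radiance / l_avg
    return l / (1.0 + l)

rng = np.random.default_rng(1)
scene = rng.uniform(0.05, 2.0, size=(4, 4))            # "true" illuminance
times = [0.25, 1.0, 4.0]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]   # simulated exposures

hdr = fuse(shots, times)     # recovered illuminance from the exposure stack
ldr = tone_map(hdr)          # display-range image
print(ldr.min() >= 0.0 and ldr.max() < 1.0)
```

The weighting function down-weights clipped pixels, so each scene value is recovered mainly from the exposures in which it is well exposed; the tone-mapping step then compresses the recovered dynamic range for display.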

Figure 2 shows the schematic diagram of the digital signal processing virtual experimental platform.

In the field of telecommunications, the most typical application of digital signal processing is image coding and compression. Whether for still images, moving images, or television images, the amount of digitally encoded data is very large; for high-quality transmission it is generally necessary to compress them to 1/10 to 1/100 of their original size. Various coding methods, as well as the wavelet transform method and the fractal signal analysis method, have provided feasible schemes for high-compression-ratio TV coding.
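A toy sketch of transform coding illustrates where such compression ratios come from: most of a block's energy concentrates in a few DCT coefficients, so discarding the rest shrinks the data toward the 1/10 range mentioned above. This is an illustrative example in the style of JPEG-like coders, not the paper's coder, and all names are our own.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
rng = np.random.default_rng(2)
block = rng.uniform(0, 255, size=(8, 8))   # one 8x8 image block

coeffs = C @ block @ C.T                   # forward 2-D DCT
keep = 6                                   # keep only the 6 largest coefficients
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

recon = C.T @ sparse @ C                   # inverse 2-D DCT reconstruction
ratio = keep / coeffs.size                 # 6 of 64 coefficients kept
print(round(ratio, 3))                     # 0.094, i.e. roughly 1/10
```

Real coders add quantization and entropy coding on top of this transform step, which is how ratios approaching 1/100 become possible for video.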

The system interface module mainly provides the interface framework of the client. It offers a simple and clear interface designed from the standpoint of user friendliness and provides the most basic experimental components for digital signal processing experiments. According to the characteristics of these experiments, the basic components fall into three types: signal source generators, basic signal operators, and signal filters. These basic components lay the foundation for building a digital signal processing experimental workflow.

The component management module mainly provides management functions for components. It includes four submodules: a component query module, a component addition module, a component deletion module, and a component modification module. The component query module obtains the relevant information about components from the server-side database when the user logs in. The component addition module writes the information submitted by the user, together with the information generated automatically during registration, into the relevant database tables when the user registers a custom component using the component registration technology. The component deletion module is used when users delete custom components; it removes the component information from the database tables and deletes the component entity from the server. The component modification module lets the user modify a component's variable information, such as its function description.

The experimental module is the core module of the system. It includes four submodules: the experiment building module, the experiment scheduling module, the experiment result display module, and the experiment management module. The experiment building module mainly solves the two problems of component visualization and component connection. The experiment scheduling module is responsible for executing the experiment process. A typical experimental process includes both local components and service components; to improve the response speed of the experiment, the number and volume of network communications must be minimized during scheduling. According to the characteristics of digital signal processing experiments, a targeted graphical display component is developed for results. The experiment management module mainly provides the functions of reading and saving experiments.

Signal processing is the foundation of communication theory and technology. Mathematical theories include equation theory, function theory, number theory, stochastic process theory, least-squares theory, and optimization theory. Circuit analysis, synthesis, and computational electronics form the technical basis. Signal processing is closely related to modern pattern recognition, artificial intelligence, neural networks, and multimedia processing and combines theoretical foundations with technical applications. Consequently, signal processing is a field with complex mathematical and analytical foundations and offers a wide range of practical technical opportunities.

2.2. Computer Network Virtual Experiment Education and Teaching Platform

The virtual test system for the computer network should be as realistic as possible. It must simulate the presentation of objective events so that users feel they are participating in learning and show greater interest in it. The system should make full use of the combination of reality and imagination, provide virtual simulation facilities suited to experimental needs, and overcome the lack of real hardware or a real network environment. The test system should have the following characteristics: (1) the simulated network equipment has logical functions and network communication protocols, and typical experiments with experimental guidance are provided. (2) The network topology can be changed dynamically during an experiment. In most similar software, experimental procedures are based on a fixed network topology: once the user builds the experiment's topology, it cannot be changed, and the experiment can only proceed on that basis. This system lets users modify the network topology at any time, for example, by connecting a new device to the network during an experiment. (3) The system should be cross-platform, unrestricted by operating system or browser type. In addition, to meet the needs of distance education, an online experiment function should also be provided. Table 1 lists the comparison of computer network simulation software.

The main user interface of the system is divided into four main areas: the device bar, the property bar, the test area, and the information bar. Each area corresponds to an interactive function. The device bar provides users with a variety of devices to choose from; it contains four main device categories: computers, hubs, switches, and routers, with different models listed in each category. Users can select the devices they need and drag them into the test area for testing. When the user selects a device from the device bar, information about the device's capabilities is displayed in the device property bar, so the user can see what the device can do. The test area allows the user to place devices, build test circuits, perform experimental operations, and manipulate devices. This area can consist of several windows: the window for building the experimental circuit has no close button in the upper right corner, while each operating device has a window with a label and a close button. The test area should support dragging devices, connecting circuits, disconnecting devices and circuits, and right-clicking on devices. The information bar contains device and test status information, mainly including the currently selected device, the right-click menu, the current function, and some functions not supported in GUI mode.

From a hierarchical point of view, the experimental system consists of three main functional layers: the machine layer (the design system layer), the simulation framework layer, and the simulation performance layer. The machine layer is mainly responsible for providing basic design, general interface design, basic data acquisition, data pump execution and invocation, and general initialization and destruction functions. The simulation framework layer provides a general framework for various simulation systems; it is implemented on top of the machine layer and is mainly responsible for executing the simulation process.

The main objectives of the system are as follows: (1) each functional module (design, user operation, kernel simulation, initialization, teardown, etc.) should be as abstract as possible, so that completing one module does not affect the execution of other modules, and changes in other modules do not affect it. (2) It provides a powerful way to interact with data so that programmers can focus on the actual execution of functions rather than on the interactions between different kinds of data (graphs, input data, and simulation algorithms). (3) It minimizes the coupling between different modules in the framework. Figure 3 shows the system flow diagram.

Administrators can participate in the management of individual system modules. The functional requirements for administrators are as follows: user management involves adding users, deleting users, changing user passwords, and setting user permissions. Course management involves scheduling sample courses, scheduling sample courses for students, and importing sample courses. Information management involves retrieving information and deleting information. Figure 4 shows the system administrator use case diagram.

Teachers, as ordinary users of the system, are mainly responsible for sharing information with students and using the system to enable students to obtain the information they need. The functional requirements are as follows: user management involves changing personal information and changing passwords. Course management involves viewing courses. Information management involves retrieving information and replying to information. Score management involves setting scores and retrieving test scores. Exam administration involves downloading score files. The teacher use case diagram is shown in Figure 5.

When establishing the management system for information management and user functions, the system is first evaluated: the roles of the user and the platform administrator are determined, the relationships between the database and its data tables are determined, and the database is created. Then, according to the principles of system efficiency and reliability, a system development platform is chosen. When developing add-ons, installing a web browser emulator on the client should be considered; the search rules of other applications are examined and research is conducted on the user's situation to determine how to apply the findings. For experimental data recovery, the recovery method is designed by observing the properties of the output file loaded by the client. Finally, the statistical methods applied to the collected data are evaluated, based on the students' understanding of data point management, to determine a comprehensive approach.

The computer network virtual learning platform is a further development of the virtual learning management system. The requirements for stable system operation differ from the requirements for the simulator. However, if the experiments need additional functionality or compatibility with other emulators, a platform is needed for better accessibility. A shared system therefore acts as an intermediary that facilitates system updates: it meets current needs while providing a venue for future extensions. The system is powerful, has a simple and clear interface, is easy to learn, and is easy to extend and maintain. It can give users an excellent experience, increase students' interest, and improve learning efficiency.

In this system, the relationship between teaching management and simulation is standardized. The system integrates efficient information sources and provides high-quality, effective information services for teachers and students across the school, such as teacher and student performance queries. Verifying user credentials and protecting confidential information enhances the security and confidentiality of the virtual learning platform. The main purpose of the experimental learning forum is to manage and maintain students' personal information, textbook information, course information, student achievement information, and the learning and experiment links.


The system requires front-end and back-end communication: the user submits the topology map through the system, and the submission result is returned. If the web page were loaded in the traditional way, the entire page would be refreshed after submission, interrupting the user's operation and seriously harming the user experience. A full page refresh also increases the amount of interactive data and affects system performance. The system therefore needs an asynchronous request scheme, which improves both the user experience and, through asynchronous interaction, the server's responsiveness.

3. Design Experiment of Educational Teaching Platform

The experiment compares three image signal processing data paths in four respects: resource consumption, number of reads and writes, redundant data ratio, and control complexity. The comparison data are listed in Table 2.

To verify the performance of the image signal processing algorithm, the proposed algorithm is compared with several existing popular algorithms. Table 3 lists the performance data of the test images under the various algorithms.

Software simulation is used to evaluate the computational complexity of the image signal processing algorithm. Two methods are used to synthesize multiple multi-exposure sequences at different resolutions; the results are listed in Table 4. The table shows that the method proposed in this study has a clear advantage in processing efficiency; especially when the image size is not large, the processing times of the two methods differ greatly.

The information system and learning platform for virtual experimental education are divided into a learning management unit and an experimental unit. The learning management unit includes user management, course structure, achievement queries, and information exchange. The experimental unit is mainly responsible for managing student affairs and examination materials. Teachers can query course information, test scores, and personal information through the system. The general task is to install, standardize, and automate the system on a virtual machine network platform. Figure 6 shows the overall module design of the system.

There are many types of virtual machine images, including client images, application server images, and router images. It is relatively easy to test the limit on how many images of a single type can run on one host, but when many image types run together, it is difficult to test the limit of their combined operation.

Such a configuration not only guarantees users' performance requirements but also allocates them the minimum resources needed, so that the host can support more virtual machines running simultaneously. A small program (such as a network packet capture program) runs in each virtual machine to simulate normal use, so each virtual machine occupies a certain amount of memory. Figure 7 shows the maximization test results. When the number of virtual machines reaches a certain level, they begin to respond sluggishly.

The two main user operations in the system are submitting resource requests and releasing resource requests. Submitting a resource request involves background operations such as starting the virtual machine, configuring its network card, and resetting it. Releasing a resource request requires the background to reset the virtual machine, configure its network card, suspend it, and perform other operations. Therefore, the background response speed is tested for both operations, before and after applying the thread pool. Figure 8 shows the test results.

As can be seen from the figure, after the thread pool scheme is adopted, the speed is greatly improved for both submitting and releasing resource requests; submitting resource requests is even several times faster. The thread pool makes full use of the server's multicore advantage and improves its concurrent processing capability. Through this scheme, the system improves processing performance, reduces users' waiting time, and improves experimental efficiency.

When a thread pool is used, a capacity that is too large or too small may affect system performance. In a given hardware and software environment, an appropriate thread pool size can handle more requests: the system should neither let requests wait in a queue while it still has capacity to process them nor run more requests than it can manage. Therefore, thread pools of different sizes are configured and the system response time is tested using the methods above. Figures 9 and 10 show the comparison results.

Since the host can process only a limited number of commands concurrently, system performance does not always improve as the thread pool grows. Beyond a certain size, the pool increases the system burden: too many threads slow the response to the first submitted requests, resulting in an increase in the average response time. This shows that, in a given hardware and software environment, there is an optimal thread pool configuration. In this system, the thread pool parameters are configured through files, so they can be reconfigured dynamically in the future to adapt to changes in the hardware, software, and application environment.
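The thread-pool experiment described above can be sketched with the standard library's `ThreadPoolExecutor`: the same batch of simulated resource requests is replayed against pools of different sizes, and the mean response time is compared. The 50 ms task duration and the request count are invented for illustration and do not reproduce the paper's measurements.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    # Stand-in for a background operation such as starting or
    # configuring a virtual machine.
    time.sleep(0.05)
    return True

def mean_response_time(pool_size, n_requests=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        list(pool.map(handle_request, range(n_requests)))
    return (time.perf_counter() - start) / n_requests

t1 = mean_response_time(1)
t4 = mean_response_time(4)
print(t4 < t1)   # a larger pool shortens the average wait
```

In a real deployment the curve flattens and eventually reverses once the pool exceeds what the host can run concurrently, which is why the optimal size must be found by measurement, as the figures above show.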

The background resources corresponding to a network topology map include clients, application servers, switches, routers, and the connections between them. To provide these resources to users, the corresponding virtual machines need to be allocated in the background. Devices connected to the same switch should be in the same network so that they can communicate directly within one local area network. For networks connected by routers, data packets should be isolated so that they can communicate only through the routers. The networks of different users should also not interfere with one another.
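The subnet rule above can be checked mechanically with the standard library's `ipaddress` module; the addresses below are our own example values, not the platform's actual addressing plan.

```python
import ipaddress

# Devices attached to the same switch share one subnet and talk directly;
# a device behind the router sits in a different subnet and must be forwarded.
lan = ipaddress.ip_network("192.168.10.0/24")

pc_a = ipaddress.ip_address("192.168.10.21")    # same switch
pc_b = ipaddress.ip_address("192.168.10.87")    # same switch
server = ipaddress.ip_address("10.0.5.3")       # behind the router

print(pc_a in lan and pc_b in lan)   # True: direct LAN communication
print(server in lan)                 # False: traffic must cross the router
```

The same membership test is what the background can use when wiring virtual network cards: devices whose addresses fall inside one subnet are bridged together, while cross-subnet traffic is handed to a router virtual machine.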

4. Discussion

The experimental resources used by students are consistent with real experimental resources: the routers, application servers, and other devices that students see on the experimental panel are real network devices, and the network packets between them are real packets. This is a fundamental difference from existing computer network simulation systems; in this mode, users analyzing network packets are not constrained by the system as they would be in a simulated one. During an experiment, users may encounter unexpected situations such as a power outage, and the background system should be able to save the experiment state so that the user can resume the experiment after the emergency is over. Moreover, when a user completes an experiment, the same function allows the teacher to evaluate the student's results. To ensure efficient use of resources, the system platform should run under high load so that more users can perform more network experiments at the same time. The system administrator should monitor the hosts in real time and adjust the allocation of system resources according to the system's parameters. The goal of system monitoring is dynamic, multilevel resource monitoring covering many management objects, not just the running status of the physical hosts.

For a given hardware host, if each virtual machine is assigned parameters such as memory and network bandwidth, the host can support only a maximum number of virtual machines running at the same time. Ensuring the normal, smooth operation of the virtual machines is thus a problem of maximizing virtual machine resource allocation, which saves cost and makes the most of resources. As noted above, it is easy to test the limit for a single image type but difficult to test the limit of a mixed combination of many image types. Therefore, how to deduce the dynamic combination ratio between multiple image types from the limit of a single type is the question studied in this section.
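One simple way to reason about the combination question, offered here as a toy model rather than the paper's method, is to treat each image type as consuming a fixed share of one bottleneck resource, derived from its single-type limit, and to check whether a mixed allocation still fits. The per-type limits below are invented for illustration.

```python
# Measured single-type limits: how many VMs of each image type
# the host can run alone (invented example numbers).
single_type_limit = {"client": 40, "app_server": 10, "router": 20}

def mix_fits(mix):
    # Each instance of type t uses 1/limit[t] of the host's bottleneck
    # resource; a mix fits while the shares sum to at most 1.
    return sum(n / single_type_limit[t] for t, n in mix.items()) <= 1.0

print(mix_fits({"client": 20, "app_server": 2, "router": 4}))   # fits: 0.9
print(mix_fits({"client": 30, "app_server": 5, "router": 10}))  # too much: 1.75
```

This linear model assumes a single dominant bottleneck (typically memory); when CPU, memory, and I/O all bind, one constraint of this form is needed per resource.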

Network protocol simulation is the core part of computer network simulation. The ultimate purpose of computer network simulation is to provide an experimental platform, and only through the emulation of network protocols can this platform be made available. As is well known, computer networks follow the OSI seven-layer model, and each layer has numerous protocols and protocol families. If every protocol from the physical or data link layer up to the application layer were emulated, the workload would be staggering, defeating the purpose of emulation. The purpose of simulation is to let users immerse themselves in the virtual experimental environment created by the computer system, interact with it, and obtain a feeling similar to actual physical participation, rather than to depict every detail of objective things intact.

5. Conclusion

The computer network virtual learning platform provides students with an environment for independent research and on-demand learning by allocating more practical program resources and experimental environments to individual students, which alleviates the shortage of teaching materials in the learning environment. It is built on a student learning management platform and further developed through program management, results analysis, and other methods. Combined with the computer network course experiments, this study proposes the overall design and system architecture of a virtualization-based computer network experiment platform and lists the system's realization goals. The method proposed in this study further simplifies the fitting process, which improves the real-time performance of the entire signal processing algorithm and also helps compress product costs to improve market competitiveness.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This research study was sponsored by Hebei Province Higher Education Teaching Reform Research and Practice Project, Computer Professional Curriculum Education Teaching Method Innovation Research and Practice under Emerging Technology Paradigm (2020GJJG243); Postgraduate Education and Teaching Reform Research Project of Hebei University of Architecture, Innovation and Practice of Postgraduate Course Teaching Methods under Emerging Technology Paradigm (2020YJSJG08); and Hebei Province Course Ideological and Political Demonstration Course for Postgraduate Construction Project, Cloud Computing and Advanced Network Technology (YKCSZ2021162). The authors thank these projects for supporting this article.