Abstract

Research on wireless sensor network video surveillance systems is an active topic in multimedia classroom education and teaching. Based on 5G communication network theory, this paper constructs a wireless sensor network video monitoring system for multimedia classroom education and teaching, systematically analyzes the new distance education model under the integration of the two networks, and improves the interactive live classroom education model. For the classroom teaching mode, the model studies the isochronous transmission technology of data collection and video encoding/decoding, adopts hardware compression encoding, integrates key technologies such as reliable multicast and conditional access, and proposes a system design scheme that addresses the digitization of multimedia classroom education and teaching. In the simulation, the MATLAB software platform was used to study the effect of the number of 5G communication network nodes and of node attributes (such as node position, node sensing angle, instantaneous sensing direction or the direction difference between neighboring nodes, and node speed) on the coverage of the target area or target points. The experimental results show that the joint optimization of transmission video quality and network lifetime performs well under multipath conditions: the total utility under multipath converges to 8.3, while under a single path it converges to 6.5, which further improves the real-time performance of the multimedia network classroom teaching system.

1. Introduction

The rapid development of wireless communication technology, microelectronics, and microelectromechanical systems has given birth to wireless sensor networks. Wireless sensor networks include omnidirectional sensor networks (WSNs, whose sensing range is a circular area) and wireless multimedia (directional) sensor networks (WMSNs, whose sensing range is a sector-shaped area) [1]. At present, there is a large body of research on wireless sensor coverage, but most of it focuses on the omnidirectional sensing model; the existing research on wireless multimedia sensor network coverage has considered the directionality and rotatability of nodes [2–4]. However, in the definition and calculation of “coverage,” only the instantaneous sensing range of the node is considered, and the influence of node rotation on “coverage” is ignored, so the existing results have certain limitations for practical applications [5]. In conclusion, wireless multimedia sensor networks face many new challenges, and new solutions are urgently needed to guide the effective implementation of directional sensing network systems [6].

Network streaming media refers to media formats, such as video or other multimedia files, played on the Internet by means of streaming transmission [7, 8]. It is widely used in online news release, online live broadcast, online advertising, distance education, and real-time video conferencing; currently, the most direct applications are live broadcast and video-on-demand. Network teaching is increasingly favored because of its rich information resources, friendly interactivity, and excellent openness, and it has gradually developed into a relatively mature new form of education [9]. Although the existing research on wireless multimedia sensor network coverage considers the directionality and rotatability of nodes, it ignores the potential coverage (the wireless sensor network delay domain) brought by node rotation and the influence of the node rotation speed on the target coverage ratio [10–12]. This poses a challenge for the amount of video multicast data in the wireless sensor network. If wireless sensor network video multicast transmission is provisioned for the bandwidth of users with high access rates, users with low access rates cannot receive the video data; if it is provisioned for users with low access rates, users with high access rates waste a large amount of bandwidth, which degrades video playback. To resolve this contradiction, this paper uses layered wireless sensor network video multicast to realize video transmission, meeting the needs of users with different access bandwidths. In short, directional sensor networks face many new challenges, and new solutions are urgently needed to guide the effective implementation of directional sensing network systems [13].

Based on the practical application of wireless multimedia sensor networks, this paper considers the orientation and rotatability of the sensor, introduces sensor rotation into the definition and calculation of “coverage,” introduces the concept of “delay,” and establishes the corresponding coverage models for multimedia classroom education and teaching. At the same time, the influence of the number of multimedia classroom education and teaching nodes and of node attributes (such as node position, node sensing angle, instantaneous sensing direction or the direction difference between neighboring nodes, and node speed) is analyzed. The optimal number of nodes needed to meet the coverage requirement is derived, and the deployment plan of multimedia classroom education and teaching nodes is studied to determine the impact of node placement distance and node attributes on the coverage of the target area.

From the perspective of communication protocols and networks, although streaming media technology itself is quite mature, there are still many problems in practical application, mainly because the main network currently carrying streaming media is the IP network [14]. The prevailing protocol of the current IP network is IPv4, which provides only best-effort delivery. The load on a video network source depends on the number of users and their transmission volume, which is unstable, so the playback stream is also unstable. Therefore, the integration of various broadband digital systems such as the Internet and DVB has a broad material and mass basis [15]. The development of new information services in broadband systems provides new opportunities for manufacturers, information providers, and broadband system operators, as well as a richer information, culture, and education environment for the general public.

2. Related Work

SPAN is a connectivity maintenance protocol that can shut down unnecessary nodes, so that all active nodes are connected through the communication hub and every inactive node is connected to at least one active node. Hong et al. [16] proposed that when an active node satisfies neither SPAN nor the wake-up rules of CCP, it enters the sleep state. A sleeping node wakes up periodically and enters the listening state; if it then satisfies the SPAN or CCP rules, it becomes active again. Hossain et al. [17] designed a node scheduling method that can provide different monitoring qualities for different regions. The ability of sensors to monitor an intruder’s path depends on the intruder’s knowledge of the network. Mrabet et al. [18] argue that when the intruder does not know the exact locations of the sensors and randomly selects a specific path from port 0 to port 1 whose distance in the tubular area is less than or equal to the node perception radius, then if there is at least one sensor in that area, the intruder must be detected while traveling from port 0 to port 1. The server-side network interface only sends data and does not have to process client requests or be congested by client data, so its bandwidth is not overloaded as the number of accessing users grows. Kochhar and Kumar [19] found that the work of handling large data loads is handed over entirely to reliable multicast protocols and to the end user’s network and host processing power; the bottleneck of the data service is thereby transferred from the server to the network.

To find such a path, a combination of the Voronoi diagram and the Delaunay triangulation is mostly used; in this coverage management, only the distance between the target and the sensor node is considered [20]. When the cable TV network is used to carry out distance education, the teaching programs (including real-time ones) are sent to the front-end system of the cable TV network and then distributed to thousands of households through the transmission and distribution network, and the relevant audio and video programs are placed in front-end storage equipment so that users can conveniently conduct video-on-demand and self-learning. Access to the Internet provides users with a broad learning space and puts relevant learning resources online, which supports user-assisted learning [21–23]. In exposure coverage studies, not only the distance of the target from the sensor node but also the exposure time of the target is considered, and the coverage rate increases with the allocated monitoring time (exposure) [24]; researchers have discussed how to introduce the duration for which the target is detected by a sensor to find the minimum and maximum exposure paths. The main methods used are lattice theory, the Voronoi diagram, and so on [25, 26].

3. The Teaching Structure of Multimedia Classroom Education under 5G Communication Network

3.1. 5G Communication Network Topology

In random deployment, the target area of the 5G communication network is reachable, and nodes are scattered randomly in the target area. The number of scattered nodes directly affects the probability that the target area is covered: too many nodes generate a large amount of redundant monitoring data and increase the workload of the base station, while too few nodes cannot meet the coverage requirements. Whether cooperation can continue depends on how the benefits are distributed among the participants; distribution is the core of the cooperative game. A distribution is a vector of the game, each dimension of which represents the benefit allocated to the corresponding player. Except for signaling, the media stream is not sent directly to the other participants; instead, the multimedia is sent to a public IP wireless sensor network video multicast address, and the IP wireless sensor network video multicast function is responsible for delivering the multimedia stream to the other parties.
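The effect of the number of randomly scattered directional nodes on coverage can be illustrated with a small Monte Carlo sketch (written here in Python rather than on the MATLAB platform used in the experiments); the field size, sensing radius, and sector angle below are illustrative assumptions, not the values used in this paper.

```python
# Hypothetical sketch: how the number of randomly scattered directional nodes
# affects target-area coverage. All parameter values are assumptions.
import math
import random

FIELD = 500.0           # square target area side length (m), assumed
RADIUS = 60.0           # node sensing radius (m), assumed
FOV = math.radians(60)  # sector sensing angle, assumed

def covered(px, py, nodes):
    """True if point (px, py) lies inside the sensing sector of any node."""
    for nx, ny, heading in nodes:
        dx, dy = px - nx, py - ny
        if dx * dx + dy * dy > RADIUS * RADIUS:
            continue
        diff = abs((math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi)
        if diff <= FOV / 2:
            return True
    return False

def coverage_ratio(num_nodes, samples=2000):
    nodes = [(random.uniform(0, FIELD), random.uniform(0, FIELD),
              random.uniform(0, 2 * math.pi)) for _ in range(num_nodes)]
    hits = sum(covered(random.uniform(0, FIELD), random.uniform(0, FIELD), nodes)
               for _ in range(samples))
    return hits / samples

for n in (50, 100, 200, 400):
    print(n, round(coverage_ratio(n), 3))
```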

Sent by the real-time data receiver, the RR message provides, for each wireless sensor network source, information such as the number of lost packets, the highest sequence number of received packets, the interarrival jitter, the time the last SR message was received, and the delay since that SR packet. Sent by an active wireless sensor network source sender, the SR report not only provides the data-reception-quality feedback of the end system acting as a receiver (the same as the RTCP RR message) but also provides the SSRC (synchronization source) identifier, the NTP timestamp, the RTP timestamp, the number of packets sent, the number of bytes sent, and the information in Table 1.
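The report fields listed above can be summarized as simple data structures; the sketch below paraphrases the RFC 3550 SR/RR report fields and is not an implementation of the system itself.

```python
# Illustrative data structures for the RTCP feedback described above,
# loosely following the RFC 3550 report fields; names are paraphrases.
from dataclasses import dataclass

@dataclass
class ReceiverReportBlock:      # carried in RTCP RR packets
    source_ssrc: int            # which media source the feedback refers to
    fraction_lost: float        # fraction of packets lost since the last report
    cumulative_lost: int        # total packets lost
    highest_seq_received: int   # extended highest sequence number received
    interarrival_jitter: int    # estimate of arrival-time jitter
    last_sr_timestamp: int      # compact NTP time of the last SR received
    delay_since_last_sr: int    # delay since that SR was received

@dataclass
class SenderReport:             # carried in RTCP SR packets
    sender_ssrc: int            # synchronization source identifier
    ntp_timestamp: int          # wall-clock (NTP) timestamp
    rtp_timestamp: int          # the same instant in RTP media time
    packets_sent: int           # total RTP packets sent
    octets_sent: int            # total payload bytes sent
```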

Suppose each symbol is sent sequentially; that is, the symbols are sent in order within a block, and the blocks are sent in the order of their positions within the ADU. Data is transmitted according to this schedule.

This schedule has the following characteristic: because of the continuity of symbol transmission, a burst of packet loss may wipe out the symbols of one or several consecutive blocks, so that those blocks cannot be recovered at the receiving end. Each member participating in the session periodically sends an RTCP packet, and each site can estimate the number of session members accordingly, so as to adjust the amount of real-time control information in time and keep the control traffic in balance with the media traffic (in a multimedia conference, the control traffic is generally about 5% of the media traffic).
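A minimal sketch of this 5% rule follows: each member scales its RTCP reporting interval with the estimated number of session members so that the control traffic stays near the target fraction of the media bandwidth. The packet size, fraction, and minimum interval below are illustrative assumptions.

```python
# Sketch of the "control traffic ~ 5% of media traffic" rule described above.
def rtcp_interval(members, media_bandwidth_bps, avg_rtcp_packet_bits=1600,
                  rtcp_fraction=0.05, min_interval_s=5.0):
    rtcp_bandwidth = media_bandwidth_bps * rtcp_fraction   # bits/s for control
    interval = members * avg_rtcp_packet_bits / rtcp_bandwidth
    return max(interval, min_interval_s)                   # never report too often

# A 1 Mbit/s session with 30 members reports roughly every max(0.96, 5) = 5 s.
print(rtcp_interval(members=30, media_bandwidth_bps=1_000_000))
```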

3.2. Multimedia Classroom Area Perception

In the long-distance multimedia classroom teaching activities, under normal circumstances, one teacher is lecturing and several students are listening. The teacher’s lecture process is transmitted to students in real time through network transmission to meet the needs of teaching activities. In the multimedia network classroom teaching system, the transmission of multimedia data is a point-to-multipoint transmission mode. The teaching content produced by the multimedia network classroom is sent out in real time through the live server.

To ensure the reliability of transmission, the redundancy factor (RF) should be as high as possible. However, if the RF is too high, the receiver receives too much redundant information, which reduces its efficiency once enough symbols have been received to restore the original block.

Symbols belonging to that block which arrive afterwards are discarded as useless packets. A suitable RF value can be found from the packet loss rate of the media network: let RL be the network packet loss rate; the RF must then be chosen large enough that the receiving end still obtains enough symbols in one round of transmission to recover the block.
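One simple way to relate the two quantities, assuming symbols are lost independently at rate RL, is to send roughly 1/(1 − RL) times the original number of symbols plus a small safety margin; the sketch below illustrates this reasoning and is not the paper's exact formula.

```python
# Assumed rule of thumb for choosing the redundancy factor (RF) from the
# packet loss rate (RL): after losing a fraction RL of sent symbols, at least
# the original k symbols should still arrive in one round.
import math

def redundancy_factor(loss_rate, safety_margin=0.1):
    return (1.0 + safety_margin) / (1.0 - loss_rate)

def symbols_to_send(k, loss_rate):
    return math.ceil(k * redundancy_factor(loss_rate))

# With a 20% packet loss rate, a 100-symbol block is sent as ~138 symbols.
print(symbols_to_send(100, 0.20))
```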

The reliable transport layer of the sender in Figure 1 first receives the application data unit (ADU) from the application layer and submits it to the FEC module for segmentation (data disassembling, dividing the ADU into multiple data blocks suitable for encoding) and redundancy coding. To increase the reliability of transmission and improve reception efficiency, the sending sequence of the data is also scheduled (data scheduling) before sending, and the data is sent out at a specified rate according to the scheduled sequence. At the receiving end, the received data undergoes FEC decoding and assembly (data assembling), and finally the original data block, namely the ADU, is recovered and submitted to the application layer for processing. The reliable transport layer thus mainly includes two modules: the FEC module and the data scheduling module.
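The pipeline in Figure 1 can be sketched as follows. A single XOR parity symbol per block stands in for the real FEC code (it can recover only one lost symbol per block), so the sketch illustrates the disassemble/encode/decode/assemble flow rather than the coding scheme actually used by the system; symbol and block sizes are assumptions.

```python
# Minimal sketch of the sender/receiver pipeline: disassemble the ADU into
# blocks, add redundancy, then decode and reassemble at the receiver.
from functools import reduce

SYMBOL = 4   # symbol size in bytes, assumed
BLOCK_K = 3  # data symbols per block, assumed

def disassemble(adu: bytes):
    symbols = [adu[i:i + SYMBOL].ljust(SYMBOL, b'\0')
               for i in range(0, len(adu), SYMBOL)]
    return [symbols[i:i + BLOCK_K] for i in range(0, len(symbols), BLOCK_K)]

def encode(block):
    # Append one XOR parity symbol; a stand-in for a real FEC code.
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                    block, b'\0' * SYMBOL)
    return block + [parity]

def decode(received):
    # `received` contains at most one None (a lost symbol) per block.
    if None not in received:
        return received[:-1]
    idx = received.index(None)
    missing = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                     (s for s in received if s is not None), b'\0' * SYMBOL)
    received[idx] = missing
    return received[:-1]

adu = b"multimedia classroom ADU payload"
blocks = [encode(b) for b in disassemble(adu)]
blocks[0][1] = None                      # simulate one lost symbol in block 0
recovered = b"".join(b"".join(decode(b)) for b in blocks).rstrip(b'\0')
print(recovered == adu)                  # True: the lost symbol was rebuilt
```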

3.3. Random Distribution of Classroom Education and Teaching

The classroom education and teaching tracking system obtains the position parameters of the camera and transmits them to the graphics workstation, which uses them to generate different parts of the multimedia network scene or different multimedia network scenes, so that the live video information matches the multimedia network scene. If the routing problem is considered only from the perspective of maximizing the benefit of nodes and paths, nodes with large benefits are more likely to be selected as routing nodes and are therefore more likely to fail due to energy exhaustion, resulting in poor network connectivity. Taking the energy distribution of network nodes into account, the node degree is combined with the nodes' residual energy to redefine the node revenue function. All control-layer code is kept in the grails/app/controller directory.
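Since the revenue function itself is not reproduced here, the following sketch only illustrates the idea described above, under the assumption that a node's routing benefit is weighted by both its degree and its residual-energy ratio; the functional form and the weight alpha are assumptions, not the paper's definition.

```python
# Hedged sketch of an energy-aware node revenue function: high-benefit but
# energy-poor nodes score lower, so they are chosen less often as routers.
def node_revenue(base_benefit, degree, residual_energy, initial_energy,
                 alpha=0.5):
    energy_ratio = residual_energy / initial_energy   # in [0, 1]
    connectivity = degree / (degree + 1)              # saturating degree term
    return base_benefit * (alpha * connectivity + (1 - alpha) * energy_ratio)

# A well-connected node with little energy left scores lower than a moderately
# connected node with a nearly full battery.
print(node_revenue(10, degree=6, residual_energy=0.2, initial_energy=1.0))
print(node_revenue(10, degree=3, residual_energy=0.9, initial_energy=1.0))
```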

At the user’s receiving end, an intermediate layer is inserted between the TCP/IP protocol stack and the NDIS (Network Driver Interface Specification) interface, and conditional access to the underlying data packets is completed in this intermediate driver layer. NDIS is the network interface specification, and Windows uses the NDIS function library to implement the NDIS interface. The front-end service scheduling unit optimizes and schedules resources to form a push schedule, and the broadcast control unit forms the current task list of each sending device according to the push schedule and generates classroom navigation information; the navigation information is passed to the navigation information transmitter, which sends it from the server to the client, and the client can then obtain the live classroom service information and receive the service on the corresponding time and information channel.

Classroom information navigation includes a set of navigation entries for live classes; each entry corresponds to a live class teaching process (i.e., represents one live class session) and describes the service information of that live classroom.

Because the transmission of video data in Figure 2 is realized by the RTP/RTCP protocol, and RTP/RTCP runs on top of UDP, the multimedia data packets from the teacher’s end to the student’s end may take different paths and therefore different amounts of time, so some data arrive early and some arrive late. Because of the real-time requirements of the multimedia network classroom teaching system, late datagrams must be discarded. The packet-loss processing module judges whether a data packet is late; if so, the packet is discarded and a corresponding error-concealment strategy is applied to compensate for the short pause caused by the discarded packet.
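The late-packet check can be sketched as follows, assuming an RTP-style timestamp-to-playout-time mapping and a repeat-last-frame concealment policy; both are illustrative assumptions rather than the system's actual implementation.

```python
# Sketch of late-packet detection: a packet whose timestamp maps to a playout
# time that has already passed is dropped, and the previous frame is held as
# a simple error-concealment strategy.
class PlayoutBuffer:
    def __init__(self, clock_rate=90000, playout_delay_s=0.2):
        self.clock_rate = clock_rate
        self.playout_delay = playout_delay_s
        self.base_rtp_ts = None
        self.base_wall = None
        self.last_frame = None

    def on_packet(self, rtp_ts, frame, now_s):
        if self.base_rtp_ts is None:          # first packet fixes the mapping
            self.base_rtp_ts, self.base_wall = rtp_ts, now_s
        due = (self.base_wall + (rtp_ts - self.base_rtp_ts) / self.clock_rate
               + self.playout_delay)
        if now_s > due:                        # packet is late: discard it
            return self.last_frame             # conceal by repeating last frame
        self.last_frame = frame
        return frame
```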

3.4. Delay Analysis of Wireless Sensor Networks

The video data on the teacher side is encoded in layers, and the video transmission module sends it to the wireless sensor network video multicast backbone network through different wireless sensor network video multicast addresses. The transmission scheduling strategy selects one or more of the valid paths to send the multimedia data; if the routing table of the source node contains no valid path information to the destination node, the source node finds all neighbor nodes through the neighbor discovery algorithm. To cope with delay jitter, a video network source is designed; since several threads may use the same video network source at the same time, mutually exclusive access to it is required.

During data synthesis, video network source synchronization technology is used to keep the video of the teaching site well synchronized with the teacher’s operation on the teaching computer or the switching of electronic handouts. The technique adds a unified video network source or time code to the units of the two different media data streams, and information units carrying the same video network source are displayed at the same time. When sending, each medium is divided into units in chronological order, each unit is stamped with a time stamp on the same time axis, and media units with the same time stamp share the same video network source. After each medium arrives at the terminal, the media units with the same video network source are displayed at the same time, thus achieving inter-media synchronization as shown in Figure 3.
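A minimal sketch of this timestamp-driven release logic is given below; the two-stream model and the queue structure are illustrative assumptions, not the system's actual scheduler.

```python
# Sketch of inter-media synchronization: units from different streams that
# carry the same timestamp on the shared time axis are released together.
import heapq
import itertools

class SyncPlayer:
    def __init__(self):
        self.queues = {"video": [], "audio": []}   # per-media min-heaps
        self._seq = itertools.count()              # tie-breaker for the heaps

    def push(self, media, timestamp, unit):
        heapq.heappush(self.queues[media], (timestamp, next(self._seq), unit))

    def pop_synchronized(self):
        """Release the earliest units whose timestamps match across all media."""
        if any(not q for q in self.queues.values()):
            return None
        ts = max(q[0][0] for q in self.queues.values())
        for q in self.queues.values():             # drop units older than ts
            while q and q[0][0] < ts:
                heapq.heappop(q)
        if all(q and q[0][0] == ts for q in self.queues.values()):
            return {m: heapq.heappop(q)[2] for m, q in self.queues.items()}
        return None
```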

The design of the teacher terminal mainly includes video information collection, coding, and wireless sensor network video multicast transmission.

Video data are sent to the wireless sensor network video multicast backbone network through different wireless sensor network video multicast addresses. While the video information collection module realizes video information collection, it can adjust the video, including color adjustment and video size adjustment. When the trade-off factor t increases, better transmission video quality can be obtained, but the network lifetime is relatively reduced; when the trade-off factor decreases, the result is just the opposite. At the same time, the simulation results also show that, compared with the single-path case, the transmission video quality and network lifetime are significantly improved in the multipath case.
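Schematically, and only as an assumed illustration of this trade-off (the paper's exact utility functions are not reproduced here), the joint objective can be written with a video-quality utility and a lifetime utility weighted by the trade-off factor, so that a larger factor favors video quality at the expense of network lifetime, consistent with the behaviour described above:

```latex
% Assumed schematic form of the joint objective; U_q denotes the transmission
% video quality utility, U_T the network lifetime utility, and t the
% trade-off factor discussed in the text.
\begin{equation}
  \max \; U = t \, U_q + (1 - t)\, U_T , \qquad 0 \le t \le 1 .
\end{equation}
```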

4. Construction of Wireless Sensor Network Video Monitoring System for 5G Communication Network

4.1. 5G Communication Network Class-Aware Deployment

The video data receiving module selects and receives the data of the corresponding wireless sensor network video multicast group according to the current 5G communication network condition. At the same time, the video data receiving module is responsible for dealing with the problem of packet loss, sorting, and delay jitter and is responsible for the synchronization between data of different multicast groups. After the video receiving module restores the time sequence of the video data generated at the sending end, the DirectShow technology is used to realize the playback of the video data.

Suppose again that each symbol is sent sequentially, that is, the symbols are sent in order within a block and the blocks are sent in the order of their positions within the ADU, and data is transmitted according to this schedule. Because of the continuity of symbol transmission, a burst of packet loss may then wipe out the symbols of one or several blocks, so that those blocks cannot be recovered at the receiving end, which affects the reliability of data transmission.

The 5G communication network acquisition machine is equipped with encoding software for real-time acquisition of audio and video information. The teacher terminal is mainly responsible for capturing the screen of the teacher’s computer. By installing the teacher’s terminal software on the teacher’s teaching machine, it collects the teacher’s courseware, compresses the captured teacher’s screen, and uploads it to the audio and video collection terminal. The audio and video collection terminal collects the audio and video information of the camera through the capture card, then mixes and compresses it with the screen data transmitted from the teacher terminal, saves it as a WMV file, and broadcasts the classroom information live. The forwarding server is a streaming media server installed with Media Service, which can establish the publishing point in Table 2 to forward streaming media for a large number of users to watch.

Since the Windows Media SDK does not provide a capture function, audio and video capture is developed with the DirectShow SDK; the capture filter in the DirectShow graph uses the capture filter provided by the system, and the transform filter uses the Windows Media 9 encoder provided by Microsoft to compress the data. During collection, the desktop image of the teacher side is screen-captured every 5 seconds, sent to the collection machine as a picture stream, and inserted into the audio and video stream. Since this paper takes the lifetime of the node whose energy is exhausted first as the lifetime of the network, maximizing the network lifetime amounts to maximizing the lifetime of the node with the smallest lifetime, and the lifetime of the wireless sensor network can be expressed accordingly.
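Since the expression itself is omitted above, the following is only a standard formalization of that definition, with assumed symbols $E_i$ for the residual energy and $p_i$ for the average power consumption of node $i$:

```latex
% Hedged formalization of the lifetime definition in the text: the network
% lifetime is taken as the lifetime of the node that dies first.
\begin{equation}
  T_{\mathrm{net}} = \min_{i \in \mathcal{N}} \frac{E_i}{p_i} ,
  \qquad
  \max \; T_{\mathrm{net}} \;\Longleftrightarrow\; \max \; \min_{i \in \mathcal{N}} \frac{E_i}{p_i} .
\end{equation}
```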

4.2. Analysis of Teaching Control in Multimedia Classroom Education

The video acquisition module uses the function provided by the multimedia classroom education and teaching system to set the video acquisition parameters (compression type, sampling frequency, bit depth per sample, number of channels, etc.); the function is then called from this entry point, and the collected data is passed to it as parameters. The B-type perception model is established as follows: a node is represented by a five-tuple consisting of the multimedia node’s position coordinates; the sensing radius; the sensing direction of the node, which can take any value in 0–360° and can be changed by rotating the node after it is placed; the viewing-angle offset of the multimedia node, that is, its general viewing angle; and the delay, that is, the time it takes the node to sweep its entire possible sensing area by rotating.
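The five-tuple can be sketched as a data structure; the symbol names and the assumption that the rotation speed corresponds to one full sweep per delay period T are introduced here for illustration only and are not the paper's notation.

```python
# Sketch of the B-type directional perception model described above.
from dataclasses import dataclass
import math

@dataclass
class DirectionalNode:
    x: float          # position coordinates of the multimedia node
    y: float
    r: float          # sensing radius
    theta: float      # instantaneous sensing direction, 0..2*pi, rotatable
    alpha: float      # viewing-angle offset (half-width of the sector)
    T: float          # delay: time to sweep the whole possible sensing area

    def senses_now(self, px, py):
        """True if (px, py) lies inside the current sensing sector."""
        dx, dy = px - self.x, py - self.y
        if dx * dx + dy * dy > self.r * self.r:
            return False
        diff = (math.atan2(dy, dx) - self.theta + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= self.alpha

    def senses_within(self, px, py, dt):
        """True if the point can be sensed within dt seconds while the node
        rotates at the constant speed implied by the delay T (assumed)."""
        dx, dy = px - self.x, py - self.y
        if dx * dx + dy * dy > self.r * self.r:
            return False
        diff = abs((math.atan2(dy, dx) - self.theta + math.pi) % (2 * math.pi) - math.pi)
        needed = max(0.0, diff - self.alpha)        # rotation needed to face it
        omega = 2 * math.pi / self.T                # assumed rotation speed
        return needed / omega <= dt
```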

When the number of nodes is small, the rotation speed of the nodes can be adjusted by the execution of the algorithm, which can meet the requirements of a specific unit time coverage; when the number of nodes is large, the execution of the algorithm in Figure 4 can close a large number of redundant nodes.

For real-time interaction, the DirectShow SDK approach was tried first, but when a program developed with the DirectShow SDK uses the system’s codec filters, the client may not have the required filter because of version differences and the client’s installation choices. In general, the study of a network optimization problem can be divided into problem formulation, establishment of the optimization model, solution of the model to obtain the optimal solution, and simulation verification of the performance of the solution algorithm.

Moreover, the DirectShow SDK is component-based and is suited to porting the application to a webpage ActiveX plug-in; since ActiveX itself is also component-based, the class factories of the two sides conflict and cause the program to exit abnormally, so DirectShow SDK development had to be abandoned in favor of other development methods.

4.3. WSN Target Tracking Node

The data receiving end receives all data from the same video network source. For audio and video data, a video network source of adaptive size is used: if the network is smooth (almost no packet loss), the number of video network source units is reduced. When an active node needs to send multimedia data with QoS guarantee requirements to the destination node, the source node first queries its own routing table; if the routing table contains valid QoS routing information to the destination node, the source node can use it directly. Neighbor nodes are colored in different colors to represent different paths and become next-hop nodes. For example, if the video frame rate is 30 frames/s, the number of video network source units can be increased to 15 or 30, the corresponding maximum video delay is half a second or one second, and the maximum audio-video asynchronous time difference is bounded accordingly.

The CPU of the server is an Intel i3 processor with 4 GB of memory, and the operating system is Ubuntu Server, configured with FreeSwitch, Red5, Tomcat, and other related server software. Computer clients let users freely choose the video resolution, while Android tablet clients automatically switch resolution according to the network quality. The average bandwidth occupied by each video stream is 40 Kbyte/s, the bandwidth occupied by each audio stream is about 20 Kbyte/s, and other applications occupy essentially no bandwidth. In the actual classroom, only the teacher opens video, and the system allows only one person to speak at a time. Thus, when the number of participants in the classroom is 30, the upstream bandwidth of the server is about 1740 Kbyte/s, which is 13.920 Mbit/s; a 150 Mbit/s router can therefore support 10 classes at the same time.
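The capacity claim can be checked directly from the figures stated above (per-class upstream load of 13.920 Mbit/s and a 150 Mbit/s router):

```python
# Quick arithmetic check of the stated capacity claim.
PER_CLASS_MBIT = 13.920
ROUTER_MBIT = 150

print(PER_CLASS_MBIT * 1000 / 8, "Kbyte/s per class")    # 1740.0 Kbyte/s
print(int(ROUTER_MBIT // PER_CLASS_MBIT), "classes fit")  # 10 classes
```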

In wireless multimedia sensor networks, the deterministic coverage of the network differs from that of wireless omnidirectional sensor networks because of the directionality and rotatability of the nodes in Figure 5. The main goal of research on the deterministic deployment of wireless multimedia sensor networks is to achieve the highest possible coverage with as few nodes as possible. The deployment optimization problem is decomposed into a series of simple, easily solved subproblems, each of which is solved independently using the (sub)gradient algorithm; finally, the high-level master problem coordinates these subproblems through the transfer of certain formal parameters.

5. Application and Analysis of Wireless Sensor Network Video Surveillance System for 5G Communication Network

5.1. 5G Communication Network Device Coverage Data

In the 5G communication network deployment mode, if the full circle of 360° is exactly divisible by the included sensing angle, the sectors combine into a circle with no overlap; otherwise, the redundant angle left over when the circle is assembled produces a coverage overlap. These coverage overlaps are averaged over the nodes, thereby simplifying the deployment problem of the entire target area and making it easier to find a deployment method that covers the area with as few nodes and as little overlap as possible.
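A small sketch of this observation, assuming a ring of identical sectors:

```python
# If 360 degrees is an integer multiple of the sensing angle, the sectors
# tile a circle exactly; otherwise the leftover angle appears as overlap,
# here averaged over the nodes in the ring.
import math

def ring_overlap_per_node(sensing_angle_deg):
    nodes = math.ceil(360 / sensing_angle_deg)     # sectors needed for a ring
    leftover = nodes * sensing_angle_deg - 360     # redundant angle, if any
    return nodes, leftover / nodes                 # averaged overlap per node

print(ring_overlap_per_node(60))   # (6, 0.0)  -> exact tiling, no overlap
print(ring_overlap_per_node(70))   # (6, 10.0) -> 60 deg overlap over 6 nodes
```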

The wireless sensor network delay factor is introduced to balance the relative importance of the two objectives, and the mapping parameter ensures that the two are of the same order of magnitude. The constraints of the model take into account the interference between links, the link capacity constraints, and the node energy consumption constraints. The original problem is then decomposed by Lagrangian duality and solved with an improved dual subgradient algorithm.
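A generic sketch of such a dual subgradient iteration is shown below; the subproblem solver, the constraint functions, and the step-size rule are placeholders rather than the model used in this paper.

```python
# Generic dual subgradient iteration: the Lagrange multipliers are updated
# along the constraint violations and projected back to the nonnegative
# orthant, while the low-level subproblems are solved for fixed multipliers.
def dual_subgradient(solve_subproblems, constraint_violation, num_duals,
                     iterations=200, step0=1.0):
    lam = [0.0] * num_duals                       # dual variables (multipliers)
    for k in range(1, iterations + 1):
        primal = solve_subproblems(lam)           # low-level subproblems
        g = constraint_violation(primal)          # subgradient of the dual
        step = step0 / k                          # diminishing step size
        lam = [max(0.0, l + step * gi) for l, gi in zip(lam, g)]
    return lam

# Toy usage: minimize x^2 subject to x >= 1 (optimum x = 1, multiplier = 2).
solve = lambda lam: lam[0] / 2.0                  # argmin_x of x^2 + lam*(1 - x)
violation = lambda x: [1.0 - x]
print(dual_subgradient(solve, violation, num_duals=1))  # approaches [2.0]
```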

The system uses LDAP (Lightweight Directory Access Protocol) to manage user resources in a unified manner. By improving the authorization mechanism and access control in Figure 6, the system can control user access rights down to the field level, ensuring that users can only access the applications and related information they are authorized for. The original optimization problem can therefore be decomposed into several low-level subproblems, while the high-level dual decomposition master problem is responsible for updating the dual variables; in effect, solving the original optimization problem is transformed into solving its dual problem.

FEC is an encoding for loss recovery: the original data is encoded through the mathematical relationships in Figure 7, generating encoded data related to the original data, so that lost original data blocks can be recovered and output even if not all of the original data is received. At the receiving end, even if packets are lost during reception, the original block can be recovered through the corresponding FEC decoding algorithm as long as a sufficient number of the symbols are received.

5.2. Simulation of WSN for Multimedia Classroom Education and Teaching

This chapter simulates the control algorithm of the target monitoring node. To simplify the experiment, the sensors are assumed to be homogeneous, with a node sensing radius of 60 m and fixed values for the sensing sector angle, the monitored area, and the number of nodes. After the target enters the monitoring area at a random point, it moves randomly within the area until it leaves. The execution results of the algorithm show the wake-up situation of the sensor nodes as the target enters the area at a random point, moves randomly, and leaves from another point, with the boundary coverage rate held at the required level: the circular regions represent the possible sensing areas of the wake-up nodes, the curve is the target’s movement trajectory, the coverage rate of the target trajectory is 100%, and the number of wake-up nodes is minimal, so a target that enters the monitoring area, moves randomly, and finally leaves the area is monitored with the required probability.

In the design, students, teachers, and school administrators all inherit from the user class; except for the basic information, the other information in Figure 8 is stored in the UserDetail class. Each class meeting is associated with the corresponding class, so that every student and teacher belonging to the class can see the corresponding class information. Windowless controls speed up the display of the application and can contain transparent and nonrectangular controls. When the node load on a certain path is too large, the visual sensor node can preferentially select another, less loaded path for data transmission, thus balancing the transmission load in the network, making the energy consumption of the visual sensor nodes across the entire wireless sensor network more even, and extending the life of the entire network.

5.3. Example Application and Analysis

Congestion control for wireless sensor network transmission is realized by sending data packets on multiple ALC wireless sensor network video multicast addresses: at the sending end, the data packets belonging to each transmission object are multicast to four wireless sensor network video multicast addresses, each sent at a different rate, and the receiver may join several of these four multicast groups at the same time and receive their data packets. The sender adds congestion control signals to the transmitted data packets, thereby instructing the receiver how to change its receiving behavior according to the network condition and adjust its receive rate.
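The receiver-side behaviour can be sketched as follows, under the assumption of a simple loss-threshold join/leave policy; the thresholds, the group addresses, and the four-layer structure are illustrative, not values taken from the system.

```python
# Sketch of receiver-driven layered reception: join more multicast groups
# (layers) while the reported loss rate stays low, and leave the highest
# layer when congestion appears.
LAYER_GROUPS = ["239.1.1.1", "239.1.1.2", "239.1.1.3", "239.1.1.4"]  # assumed

class LayeredReceiver:
    def __init__(self):
        self.joined = 1                      # always receive the base layer

    def on_report(self, loss_rate):
        if loss_rate > 0.05 and self.joined > 1:
            self.joined -= 1                 # drop the top enhancement layer
        elif loss_rate < 0.01 and self.joined < len(LAYER_GROUPS):
            self.joined += 1                 # try one more layer
        return LAYER_GROUPS[:self.joined]    # groups currently subscribed

r = LayeredReceiver()
for loss in (0.0, 0.0, 0.08, 0.0):
    print(r.on_report(loss))
```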

In the control and management of target monitoring nodes, we hope to complete the monitoring task with as few nodes as possible; that is, we hope that the target trajectory monitored by each node in Figure 9 is as long as possible. Therefore, within the possible growth area of the monitored trajectory, the next sensing node should be as far from the junction as possible, so that the stretch of target trajectory it newly covers is as long as possible and no coverage gap appears. In addition, load balancing can be achieved by adjusting the video stream transmission in the network, so that data traffic on a congested wireless link is transferred to a relatively idle wireless link, thereby reducing the transmission delay of the video stream and ensuring the real-time nature of video streaming in the wireless sensor network.

This system adopts both transport protocols for SIP: for the call control in Figure 10, SIP requests are carried over TCP, while the transmission control of the data flow uses SIP over UDP. It can be seen from the Via line in the description of a SIP response to a call request that the underlying transport protocol of SIP is TCP. When the sensing angle of the nodes is less than 1.2566 radians, the average overlapping wireless sensor network delay domain of the nodes in the tiled mode is smaller and complete coverage can be achieved with fewer nodes, so the tiled deployment mode should be selected; when the sensing angle is larger, the average overlapping delay domain of the circular mode is smaller and the circular deployment method should be selected; and when the sensing angle of the nodes is equal to 60 degrees or 72 degrees, the two deployment methods have the same average overlapping wireless sensor network delay domain and use the same number of nodes for coverage. By adding a multipath transmission scheme, the model solves the problems that in single-path transmission some visual sensor nodes exhaust their energy early and some wireless links are congested while others are idle, so that the network transmission load is balanced, the network lifetime is extended, and the transmission delay is reduced.

6. Conclusion

In terms of 5G communication network transmission, according to the characteristics of the “one-to-many” teaching mode of the multimedia network classroom teaching system, this paper adopts wireless sensor network video multicast technology to transmit multimedia classroom teaching video data. Compared with unicast and broadcast, wireless sensor network video multicast greatly reduces the load on the backbone network when realizing one-to-many transmission, and this advantage is more pronounced in applications with relatively large transmission volumes, so the technology is particularly suitable for the transmission of multimedia data such as video. In the application layer, the RTP/RTCP protocol is used for multimedia data transmission and video network source control. For both the delayed and the nondelayed sensing network, the network monitoring capability is analyzed, including the influence of the sensor density in the network (the number of sensors per unit of the wireless sensor network delay domain), the path length of the target passing through the network, the node sensing radius, the sensing direction, and the node rotation speed on path coverage, so that staff can be guided to deploy the corresponding nodes to achieve the expected path coverage when a target traverses the network. The simulation results not only verify the convergence of the solution algorithm but also show that the two objective utility functions, transmission video quality and network lifetime, change with the wireless sensor network delay factor: when the delay factor increases, better transmission video quality is obtained but the network lifetime is relatively reduced, and when the delay factor decreases, the result is the opposite.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by School-Level Scientific Research Platform of Suzhou University (2021xjpt51), Provincial Industrial College (2021cyxy069), Key project of Natural Science Research of Anhui Provincial Department of Education (KJ2021A1110), School level Industrial College of Suzhou University (szxy2021cxxy04), and New Engineering Pilot Project (szxy2018xgk05).