Abstract

Transmission line detection is a powerful tool for protecting power safety and plays an irreplaceable role in stabilizing electricity use. However, transmission line detection involves complex objects and rich content, which poses new challenges for accurate identification and for the power consumption of the monitoring itself. Zigbee wireless communication technology offers low energy consumption, simple operation, and strong cost control, making it a key to solving the high energy consumption of transmission line detection. Edge computing is a supplement to cloud computing; its low bandwidth requirements and low latency help address the complex scenarios of wireless monitoring. This article aims to study and design a monitoring system for transmission lines to ensure the orderly operation of remote transmission line monitoring. This paper proposes to take Zigbee wireless communication as the core and integrate edge computing to construct a wireless transmission line sensing system, forming a remote transmission line detection system. This paper builds the detection system, systematically tests its energy consumption, anti-interference, fault monitoring recall, and accuracy, and evaluates the overall operation of the system. The experimental results show that the transmission line monitoring system can effectively reduce the risk of circuit failure by 20%.

1. Introduction

In recent years, with the rapid growth of society's demand for electricity, the power supply load on the power system has also increased rapidly. As key transmission equipment, transmission lines have wide coverage. High-voltage transmission lines are greatly affected by air pollution, climatic conditions, and other environmental factors. Conductor galloping, insulation pollution flashover, conductor icing, windage yaw flashover, and similar events occur from time to time, often causing arcing, property and insulation damage, conductor burnout, disconnection, tower collapse, and other crises. These faults cause serious economic losses and seriously threaten the safe operation of high-voltage transmission lines. Solving the problems of insufficient transmission capacity during peak hours and failure of some transmission lines is a major challenge facing the Ministry of Energy Management. The traditional manual inspection method cannot guarantee the accuracy of results, is labor- and material-intensive, cannot provide real-time online monitoring, cannot detect the safety hazards of high-voltage transmission lines in time, and is not efficient. The rapid development of information technology provides practical technical conditions for remote monitoring and analysis of environmental elements and transmission line operating conditions. By applying sensor technology, network technology, and software development technology in the power supply system, remote monitoring of transmission line status is realized, providing a decision-making basis for daily maintenance and line safety.

This paper designs and constructs an online transmission line monitoring system with the following advantages: (1) it makes full use of information and computing technology to collect various kinds of data, fully grasp the conditions of the transmission line, and find the causes of line failures in time; (2) it implements whole-process monitoring, troubleshooting hidden dangers of line faults at the source; (3) with the help of wireless communication technology, obstacles can be quickly reported to the relevant personnel for processing, ensuring the accuracy of detection.

Experts and scholars at home and abroad have achieved certain results in the construction of transmission line detection systems. Kabalci and Kabalci use a solar microgrid model to explore local renewable energy demand. The total length of the transmission line is 25 kilometers, and they construct an actual parameter model of the line at the output of the inverter. Binary Phase Shift Keying (BPSK) modems at different locations manage the power line communication (PLC) infrastructure. The cost of this scheme is extremely low: the power line carries the communication signal while transmitting charging power, so the monitoring cost is basically zero [1]. Zhang et al. build a visual analysis system for transmission line icing monitoring. It is based on a two-dimensional map with a customizable map layer and combines the temperature and humidity distribution of the map for data monitoring. In addition, the system combines the ice thickness prediction system with a prediction algorithm based on a hybrid deep belief network to optimize the accuracy of data monitoring. Once the icing thickness exceeds the threshold, appropriate deicing measures can be selected based on historical data analysis. The data proves that the system can reflect the statistical characteristics of the icing monitoring data, and the prediction accuracy of the icing thickness is relatively high [2]. To solve a multiobjective combined decision-making problem, Jiang et al. proposed an improved binary particle swarm optimization (MBPSO) strategy. The MBPSO algorithm is solved based on hourly weather data, and the number of sensors required by this scheme can be minimized. A section of 161 kV line was selected for testing. The results show that the sensors can effectively monitor 89.6% of conductor high-temperature events, and the root mean square error of the reconstructed conductor temperature distribution is less than 0.8 [3]. Reddy et al. study damage to transmission lines. Overhead distribution lines can be regarded as a branch of the distribution system, traversing difficult terrain under unfavorable weather conditions, so the insulation materials of the distribution lines are easily damaged. Damaged insulation causes electric field distortion around the insulation, leakage current, further damage to the insulation surface, and eventually ground faults. Electric power companies have adopted a variety of computer-aided systems to enable the Distribution Control Center (CDC) to continuously monitor the distribution system. These systems usually use image processing and other technologies to conduct electrical and physical tests on distribution lines [4]. Wu et al. discovered that the energy loss caused by dust deposition is closely related to the disconnection and damage of the photovoltaic power supply of the transmission line monitoring device. Dust can lead to incorrect power configuration, undercharging the battery or charging it for too long, thereby shortening the service life of the device. They constructed a model of dust and electric wires, qualitatively determined the attenuation coefficient related to dust deposition, and determined the best configuration of the wires. This research can optimize the power supply design and save replacement costs [5]. Tripathi and De provide a new data-driven framework to reduce the bandwidth required to transmit phasor measurement unit (PMU) data.
The performance of the proposed algorithm is evaluated by large-scale simulation of power line frequency data. A trade-off between the predictive quality of the algorithm and the execution time has been observed, which can be resolved by choosing appropriate hyperparameters. Compared with competing data reduction systems, the proposed algorithm saves about 60% of bandwidth and more accurately identifies 73% of power system disturbances [6, 7]. Fabricio et al. introduced an industrial monitoring system for electrical equipment on a production line, which aims to monitor the equipment's working status in real time, realize machine management, and quickly find deviations and faults. The system monitors the actual current consumption of the device with the help of sensors connected to a data concentrator module. The data is stored in the data concentrator module and then transmitted to an IoT platform for processing. When the data is abnormal, the system reports an error and sends a reminder signal to the manager. Tuttelberg's research analyzes the uncertainty of transmission line monitoring applications based on synchronized phasor measurements. He analyzes the propagation of uncertainty to assess the confidence in the correctness of the monitoring application and explains how measurement errors in the system affect the monitoring. The study shows how the uncertainty in voltage and current measurements propagates into the estimates and what accuracy can be obtained from this monitoring application in the general configuration of a transmission system. The research also provides insight into the sources of most of the uncertainty (and inaccuracy) of a given quantity of interest [8, 9].

3. Edge Computing and Zigbee Wireless Communication Technology

3.1. Edge Computing

Edge computing is an open platform located at the edge of the network, close to the object or data source, that integrates basic network, computing, storage, and application capabilities. It provides edge intelligent services nearby to meet the needs of real-time transmission, agile connection, application intelligence, security and privacy, and data optimization [10]. It can serve as a bridge connecting the physical world and the digital world and can provide smart systems, smart gateways, smart assets, and smart services.

Figure 1 is a model of edge computing. In this model, the cloud data center collects data from the database and from some edge devices. These edge nodes can not only request services from the cloud computing center but also handle some computing tasks themselves. Therefore, further in-depth research on the hardware platforms and software technologies related to edge computing is needed to ensure the security and reliability of data in the edge environment.

The basic idea of edge computing is to build a virtual platform that combines network, computing, storage, and application capabilities on the edge of the network close to the data source to achieve extremely low latency and location sensitivity of edge services [11]. According to the idea of edge computing, as shown in Figure 2, the edge computing architecture can be represented by the following four levels:

3.1.1. Infrastructure Layer

Infrastructure provides access to the basic network, cloud computing services, and management of equipment deployed at the edge of the network.

3.1.2. Edge Data Center Layer

The edge data center, or edge node, is one of the core infrastructures of edge computing and consists of edge servers. Edge data centers can not only collaborate with each other but also connect to the cloud. Multiple edge data centers are deployed at the edge of the network by edge users or infrastructure providers to establish a multitenant virtualization infrastructure that supports virtualization and management services.

3.1.3. Edge Network Layer

The edge network is composed of edge network equipment such as edge set-top boxes, routers, wireless access points, and gateways, and it serves as a bridge between edge users and edge data centers. The edge network provides interconnection between the infrastructure, edge data centers, and terminal devices through mobile networks, wireless networks, and the core network.

3.1.4. Edge Terminal Equipment Layer

Edge terminal equipment has the dual identity of data producer and consumer. It is connected to the edge data center through edge network equipment and benefits from the services provided by edge computing. It is mainly composed of various devices connected to the edge network.

Based on this architecture, edge computing has the following basic characteristics [12]: (1) Support for mobile services: edge network equipment can link different mobile terminals, so it needs to take into account both data transmission technology and mobile network support technology. (2) Fast transmission: edge computing simplifies the network structure, which not only supports online decision-making but also provides extremely fast transmission; compared with cloud computing, edge computing is more suitable for applications with low latency and high real-time requirements, such as health emergencies and forest fire warning. (3) Many base stations: because edge computing terminals must be closely connected to data centers, and terminal equipment is spread all over the country, edge data centers need to be built in various places.

3.2. Time Delay Theory

Edge computing service latency refers to the round-trip time from sending an application request to receiving a response, which mainly includes transmission time, propagation time, queuing time, waiting time, and processing time at the application layer [13]. In edge computing, user application requests can be processed by local or remote data centers. Since the queuing and processing delay of intermediate routers/switches is much lower than that of the target data center, the queuing and processing delay is mainly the delay of the data center application layer. Figure 3 shows the composition of the delay in the local and remote scenarios. UE refers to the user terminal, and UE-AP refers to the connection from the user terminal to the access point. All application requests from the UE first arrive at the edge data center; in this hop, each request experiences some transmission delay at the UE and some propagation delay on the UE-AP connection. Figure 3(a) shows the local processing scenario: application requests are processed directly in the local data center, that is, queue waiting and processing occur in the local data center. If the local data center cannot meet the demand, the request is redirected to a remote data center, as shown in Figure 3(b). In this case, data transmission between data centers causes additional propagation delay, while the remote data center contributes application layer queuing and processing delay.

VM refers to a virtual machine, a complete computer system with full hardware functionality simulated by software and running in a completely isolated environment; work that can be done on a physical computer can also be done in a virtual machine. Since the UE and the UE-AP connection are not affected by VM deployment and workload distribution, the delay caused by this part is not considered. Therefore, the work focuses on two delay components: the propagation delay of the connection between data centers and the queuing and processing delay of the local or remote data center. These parts can be optimized through proper VM deployment and workload distribution.

3.2.1. Propagation Delay

Depending on the location of the data center, the connection between data centers may go through the optical access network, the convergence area network, or even the transmission network, so the delay between data centers can vary widely. Let $\mathcal{Z}$ represent the set of data centers in the edge computing system. The propagation delay between data centers $i$ and $j$ is determined by the network distance between $i$ and $j$, denoted as $d_{ij}$. When a request is redirected from data center $i$ to data center $j$, it experiences the inter-data-center propagation delay $d_{ij}$.

3.2.2. Queuing and Processing Time

The queuing and processing time of a data center is determined by the workload and the capacity of the allocated VM service resources. In order to estimate the processing and queuing time on a VM, this section models the VM service system as an M/M/1 queue. The M/M/1 queuing model is a single-server queuing model that can be used to simulate the operation of many systems. For example, a post office with only one employee has a single queue of customers who arrive, wait, receive service, and leave. If customer arrivals follow a Poisson process and the service time is exponentially distributed, M/M/1 can be used to calculate the average queue length, the probability of different waiting times, and so on. The processing time is calculated as follows:

$$t = \frac{1}{\mu - \lambda},$$

where $\lambda$ is the rate of a given workload flow and $\mu$ is the VM capacity allocated to it.
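To make the formula concrete, the short sketch below evaluates the M/M/1 queuing-plus-processing delay for a single VM; the arrival rates and the VM capacity used here are illustrative assumptions, not values measured in this system.

```python
def mm1_sojourn_time(arrival_rate, service_rate):
    """Average queuing + processing time of an M/M/1 server.

    arrival_rate: workload flow rate lambda (requests per second)
    service_rate: allocated VM capacity mu (requests per second)
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: the arrival rate must stay below the VM capacity")
    return 1.0 / (service_rate - arrival_rate)


# Illustrative values only: one VM serving 100 requests/s under increasing load.
for lam in (50, 80, 95, 99):
    print(f"lambda = {lam:>3} req/s -> delay = {mm1_sojourn_time(lam, 100.0) * 1000:.1f} ms")
```

As the workload approaches the allocated capacity, the delay grows sharply, which is why the VM capacity allocated to a stream must leave headroom below the delay threshold.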

3.3. The Impact of VM Deployment and Load Balancing on Service Delay

When planning service resources for new applications, edge computing operators should first deploy virtual machines and allocate virtual machine capacity to a given business flow while respecting the differentiated delay requirements of each flow. For each flow, two decisions must be made: (1) select the target data center and (2) allocate service capacity from the corresponding virtual machines in the target data center. In terms of virtual machine deployment, the hardware capacity of each data center is limited, which limits the number of virtual machines that a data center can carry. In terms of load balancing, the more VM service capacity is allocated to a stream, the less processing and queuing time it will experience. Therefore, the VM capacity allocated to a stream depends on the time available for application-level queuing and processing. In order to provide greater flexibility in workload distribution, this section proposes a flow distribution mechanism [14]; that is, a flow can be divided into multiple subflows, and each subflow can be served in a different data center. Depending on the scenario, each substream can be a small stream or a large stream: (1) When a substream requires less service capacity than a single VM, it can be allocated to a VM shared with other substreams, as shown in Figure 3(a). In this case, the substream is called a small stream. Let $\lambda^{c}_{sz}$ represent the workload of a small flow of application $c$ from source data center $s$ to target data center $z$. Assuming that the VM capacity allocated to $\lambda^{c}_{sz}$ is $\mu^{c}_{sz}$, the average waiting time for queuing and processing can be expressed as

$$t^{c}_{sz} = \frac{1}{\mu^{c}_{sz} - \lambda^{c}_{sz}}.$$

(2) If a subflow requires more service capacity than a single VM, the subflow should be allocated to multiple VMs, as shown in Figure 3(b); it is then called a large flow. Let $\Lambda^{c}_{sz}$ represent the workload of a large flow of application $c$ from source data center $s$ to target data center $z$. Assuming that $\Lambda^{c}_{sz}$ is evenly allocated to $n^{c}_{z}$ VMs in the target data center, the average queuing and processing time of $\Lambda^{c}_{sz}$ can be expressed as

$$t^{c}_{sz} = \frac{1}{\mu_{c} - \Lambda^{c}_{sz}/n^{c}_{z}} = \frac{n^{c}_{z}}{M^{c}_{z} - \Lambda^{c}_{sz}},$$

where $M^{c}_{z} = n^{c}_{z}\,\mu_{c}$ is the total service capacity of the VMs corresponding to application $c$.

In summary, the queuing and processing time of each substream (small or large) can be calculated according to the formulas above. The overall service delay of each substream should also include the propagation delay between the source data center and the target data center. Therefore, the total service delay experienced by the business flow can be expressed as

$$T^{c}_{sz} = d_{sz} + t^{c}_{sz},$$

where $d_{ss} = 0$; that is, when the target data center coincides with the source data center, the stream is assigned to a VM in the source data center for local processing and no inter-data-center propagation delay is incurred.
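The following minimal sketch, written under the assumptions of this section (M/M/1 service per VM, even splitting of a large flow across its VMs, zero propagation delay for local processing), combines the queuing formulas with the inter-data-center propagation delay to obtain the total service delay; all numeric values are illustrative.

```python
def small_flow_delay(lam, mu):
    """Queuing + processing delay of a small subflow served by (a share of) one VM."""
    return 1.0 / (mu - lam)


def large_flow_delay(total_lam, n_vms, mu_per_vm):
    """Delay of a large subflow split evenly across n_vms VMs of capacity mu_per_vm,
    equal to n_vms / (total capacity - total load)."""
    return 1.0 / (mu_per_vm - total_lam / n_vms)


def total_service_delay(propagation_delay, queuing_delay):
    """Total substream delay: inter-data-center propagation plus queuing/processing.
    The propagation term is zero when the substream is processed in its source data center."""
    return propagation_delay + queuing_delay


# Illustrative comparison: processing locally versus redirecting to a remote data center.
local = total_service_delay(0.0, small_flow_delay(lam=80.0, mu=100.0))
remote = total_service_delay(0.004, large_flow_delay(total_lam=80.0, n_vms=4, mu_per_vm=40.0))
print(f"local: {local * 1000:.1f} ms, remote: {remote * 1000:.1f} ms")
```

The comparison shows the trade-off the optimization must balance: redirecting a flow adds propagation delay but can reduce queuing delay if the remote data center has more spare VM capacity.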

In an edge computing system, there may be several possible solutions for deploying VMs and distributing workloads to meet all demands [15]. MILP stands for mixed integer linear programming, an extension of linear programming: when some decision variables in a linear programming problem are required to be integers, the problem becomes a mixed integer linear program. In other words, the optimization problem has not only linear constraints but also integrality constraints. This article develops a MILP model [16] to determine the number of VMs deployed for each application in each data center and to allocate flows so as to minimize the hardware consumption required to deploy the VMs. That is, the goal is to minimize the consumption of hardware resources, expressed as follows:

$$\min \sum_{z \in \mathcal{Z}} \sum_{c \in \mathcal{C}} h_{c}\, n^{c}_{z},$$

where $\mathcal{C}$ is the set of applications and $h_{c}$ is the hardware resource consumed by one VM of application $c$.

The constraints are as follows:

$$\sum_{c \in \mathcal{C}} h_{c}\, n^{c}_{z} \le H_{z}, \quad \forall z \in \mathcal{Z},$$

$$T^{c}_{sz} \le D_{c}, \quad \forall c \in \mathcal{C},\ \forall s, z \in \mathcal{Z}\ \text{with}\ x^{c}_{sz} = 1,$$

where $H_{z}$ is the total amount of hardware resources in data center $z$; $D_{c}$ represents the average service delay threshold for application $c$; $n^{c}_{z}$ is an integer representing the number of VMs deployed for application $c$ in data center $z$; and $x^{c}_{sz}$ is a binary identifier that indicates whether the subflow $\lambda^{c}_{sz}$ is zero.
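As an illustration only, the sketch below expresses a simplified version of such a MILP with the open-source PuLP package; the two-data-center instance, per-VM capacity, hardware costs, and delay thresholds are assumed values, each application's flow is assigned whole to a single data center, and the even-split M/M/1 delay bound is rewritten in the linear form (VM capacity minus 1/D) times n at least lambda.

```python
# Requires: pip install pulp  (PuLP ships with the CBC solver)
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, LpBinary

# --- Assumed toy instance; every number here is illustrative ---------------
apps = {"appA": {"lam": 80.0, "delay_max": 0.05, "hw_per_vm": 2},
        "appB": {"lam": 30.0, "delay_max": 0.10, "hw_per_vm": 1}}
dcs = {"edge_dc1": 6, "edge_dc2": 4}   # total hardware units H_z per data center
VM_CAP = 50.0                          # service rate of one VM (requests/s)

prob = LpProblem("vm_deployment", LpMinimize)

# n[c, z]: number of VMs deployed for app c in data center z (integer)
# y[c, z]: 1 if app c's flow is served in data center z (binary)
n = {(c, z): LpVariable(f"n_{c}_{z}", lowBound=0, cat=LpInteger) for c in apps for z in dcs}
y = {(c, z): LpVariable(f"y_{c}_{z}", cat=LpBinary) for c in apps for z in dcs}

# Objective: minimize the hardware consumed by all deployed VMs
prob += lpSum(apps[c]["hw_per_vm"] * n[c, z] for c in apps for z in dcs)

for c, p in apps.items():
    prob += lpSum(y[c, z] for z in dcs) == 1          # each flow is served somewhere
    for z in dcs:
        # Even-split M/M/1 delay bound 1/(VM_CAP - lam/n) <= D,
        # linearized as (VM_CAP - 1/D) * n >= lam whenever the flow is placed here.
        prob += (VM_CAP - 1.0 / p["delay_max"]) * n[c, z] >= p["lam"] * y[c, z]

for z, hw in dcs.items():
    prob += lpSum(apps[c]["hw_per_vm"] * n[c, z] for c in apps) <= hw   # hardware capacity

prob.solve()
for (c, z), var in n.items():
    if var.value():
        print(f"deploy {int(var.value())} VM(s) for {c} in {z}")
```

A realistic deployment model would additionally include flow-conservation constraints for split subflows and the propagation-delay term, but the structure stays the same: integer VM counts, binary placement indicators, and linear capacity and delay constraints.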

3.4. Zigbee Wireless Communication

Zigbee takes its name from the zigzag waggle dance that bees use to communicate within the group, and it is a wireless communication technology with low energy consumption and a low transmission rate [17]. Table 1 is a comparison between Zigbee and several other communication technologies.

The advantages and characteristics of Zigbee technology are described as follows [16]: (1) High security: Zigbee provides a dedicated security system with data integrity checking and authentication, and the encryption algorithm uses the 128-bit Advanced Encryption Standard (AES), making communication more secure and reliable. (2) Large network capacity: a Zigbee network can accept up to 254 child nodes, and there can be up to 100 Zigbee networks in an area at the same time. (3) Very short delay: the sleep wake-up delay and the device channel access delay are only 15 ms, and the device search delay is only 30 ms. (4) High reliability: the media access control layer adopts a distributed access mechanism based on carrier sensing and collision avoidance, which helps avoid competition and conflict in data transmission; the node modules have an automatic networking function, all modules automatically join the network when powered on, and the data collected by the terminal nodes are transmitted through the entire network to ensure reliable transmission of information. (5) Energy saving: Zigbee can run for about one year on only two low-power batteries.

The Zigbee protocol is based on IEEE 802.15.4. IEEE 802.15.4 provides the physical layer and the media access control (MAC) layer, and Zigbee implements the network layer and application layer on top of them [17]. At each layer there is a service access point (SAP), which serves as the interface between adjacent layers. The control of these interactions is usually done by primitives; a primitive is a program segment composed of several instructions, used to realize a certain function, that cannot be interrupted during execution. Zigbee uses primitives to implement SAP control between layers, so that the SAPs coordinate the layers. The flow chart of the Zigbee protocol communication model is shown in Figure 4.
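As a purely illustrative sketch (the class names and the simplified layer behavior below are assumptions, not part of any real Zigbee stack implementation), the following code mimics how a request primitive issued at a SAP travels from an upper layer to the layer below and is answered by a confirm primitive, in the spirit of the model in Figure 4.

```python
from dataclasses import dataclass, field


@dataclass
class Primitive:
    """A service primitive exchanged at a service access point (SAP)."""
    name: str            # e.g. "NLDE-DATA"
    kind: str            # "request", "confirm", "indication", or "response"
    payload: dict = field(default_factory=dict)


class Layer:
    """Illustrative protocol layer that talks to its neighbour only through its SAP."""

    def __init__(self, name, lower=None):
        self.name, self.lower = name, lower

    def request(self, primitive):
        # The upper layer issues a .request primitive downward through the SAP...
        print(f"{self.name} -> {self.lower.name}: {primitive.name}.{primitive.kind}")
        confirm = self.lower.handle(primitive)
        # ...and receives a .confirm primitive back through the same SAP.
        print(f"{self.lower.name} -> {self.name}: {confirm.name}.{confirm.kind}")
        return confirm

    def handle(self, primitive):
        # A real layer would perform its service here; this sketch just confirms success.
        return Primitive(primitive.name, "confirm", {"status": "SUCCESS"})


# The application layer asks the network layer to send data, mirroring the flow of Figure 4.
nwk = Layer("NWK")
apl = Layer("APL", lower=nwk)
apl.request(Primitive("NLDE-DATA", "request", {"dst": 0x0001, "data": b"\x2a"}))
```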

The physical layer defines three carrier frequency bands for receiving and transmitting data. There are a total of 27 channels across these three bands: 1 channel in the 868-868.6 MHz band, 10 channels in the 902-928 MHz band, and 16 channels in the 2.4-2.4835 GHz band. According to the carrier frequency band, the center frequencies of the 27 channels are as follows:

$$f_k = \begin{cases} 868.3\ \text{MHz}, & k = 0, \\ \left[906 + 2(k-1)\right]\ \text{MHz}, & k = 1, 2, \ldots, 10, \\ \left[2405 + 5(k-11)\right]\ \text{MHz}, & k = 11, 12, \ldots, 26, \end{cases}$$

where $k$ is the channel number and $f_k$ is the center frequency corresponding to channel $k$.
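The mapping above can be checked with a small helper function; this is a direct transcription of the IEEE 802.15.4 channel formula and is offered only as a convenience sketch.

```python
def channel_center_frequency_mhz(k: int) -> float:
    """Center frequency (MHz) of IEEE 802.15.4 channel k, with k in 0..26."""
    if k == 0:
        return 868.3                      # single channel in the 868 MHz band
    if 1 <= k <= 10:
        return 906.0 + 2.0 * (k - 1)      # 10 channels in the 902-928 MHz band
    if 11 <= k <= 26:
        return 2405.0 + 5.0 * (k - 11)    # 16 channels in the 2.4 GHz band
    raise ValueError("channel number must be in the range 0..26")


# Example: the sixteen 2.4 GHz channels commonly used by Zigbee networks.
print([channel_center_frequency_mhz(k) for k in range(11, 27)])
```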

The physical layer not only defines the mechanical and electrical characteristics required for wireless connections but also defines a physical layer management entity. The physical layer mainly provides the following services: energy detection; activation of the radio frequency transceiver; channel selection and data reception and transmission; clear channel assessment of idle channels; and indication of link quality. In addition to the different physical channels, the transmission rates and modulation methods of the three frequency bands are also different. The comparison of the three frequency bands is shown in Table 2.

The media access control layer is the basic layer of the Zigbee protocol and the intermediate layer between the physical layer and the network layer. Its role is to define the mechanism of the wireless channel and control the access status of the device. Without the MAC layer protocol, once the number of nodes in the network increases, data may not be transmitted normally due to signal conflicts.

The network layer (NWK) is an indispensable layer of the Zigbee protocol. The NWK layer not only receives data from the MAC layer and the application (APL) layer but also has management functions: it establishes the network structure for different devices and manages devices joining and leaving the network. It also provides information backup and data security services. The main role of the network layer is thus to connect and disconnect devices (Figures 5 and 6).

The application layer of Zigbee is implemented by the user. Users write application programs at the application layer to perform specific functions of application objects. For example, the data collection plan for each sensor in the system is written at this level.

3.5. Zigbee Topology

Zigbee wireless communication technology has three main types of self-organizing wireless network: star structure, mesh structure, and tree cluster structure.

A Zigbee coordinator and several terminal nodes form a Zigbee star structure, as shown in Figure 7. The coordinator is located in the center of the network and is a fully functional node responsible for establishing and maintaining the network. Terminal nodes can be full-function or semifunction nodes, distributed in the network established by the coordinator and directly communicate with it. The star topology has the advantages of small scale, simple structure, low equipment cost, and easy management. The disadvantage is that the routing flexibility of the network is low.

The topology of the Zigbee mesh network is shown in Figure 8. It consists of several fully functional nodes connected together to form a backbone network. Each node in the backbone network can be connected to full-function nodes and semifunction nodes to form its own subnet. The mesh topology is a multihop system, that is, a multihop relay system. Compared with the star topology, the mesh topology offers greater transmission distance, greater security, a self-organization capability, and greater environmental adaptability.

The topology of the Zigbee tree cluster is shown in Figure 9. The tree cluster topology is a hybrid structure: it not only has the advantages of the simple, low-power star structure but also inherits the long transmission distance and self-healing property of the mesh topology. Self-healing refers to the ability to quickly recover to the original state. In a mesh topology, there are generally two or more communication paths between any two node switches in the communication subnet, so that when one path fails, information can still be sent to the node switch through another path. The disadvantage of the tree cluster topology is its network complexity: once the network coverage becomes larger, the network transmission delay grows, which strongly affects time synchronization in the network.

4. Experiment and Analysis

4.1. Experimental Design

According to the specific requirements of the remote wireless monitoring system for transmission lines, the system acquires line light intensity, temperature and humidity, wind, conductor temperature, and other elements accurately and in real time to realize remote monitoring. The transmission layer of the remote wireless monitoring system is composed of Zigbee wireless communication technology and edge computing technology, as shown in Figure 10.

First, the Zigbee node network communication platform is designed and built: Zigbee forms a local network that transmits environmental information and sends data to the Internet through the network terminal transmission protocol to realize data interaction. Finally, the environmental data reaches the remote monitoring center platform through the edge computing network, realizing remote transmission of environmental information from the line.

4.2. Experimental Test

After the design plan is completed, the monitoring system is built, and all system functions are tested to verify the feasibility and fault tolerance of the detection system. We successively conducted power consumption, anti-interference, fault monitoring recall, and system accuracy tests, and by comparing Zigbee and Bluetooth, we checked the efficiency of the system.

4.3. Data Analysis

As shown in Figure 11, Zigbee consumes very little power overall. Over a month in the static state, the power consumed is very small, and even in operation Zigbee's remaining power does not drop much. Compared with Zigbee, the power consumption of Bluetooth is obvious: its power level declines gradually in the static state and drops significantly in the working state. Zigbee therefore has a great advantage in terms of power consumption.

Theoretically, the received signal strength value is mainly related to two factors, namely, the signal transmission power and the distance between transmitter and receiver: within a certain range it increases with the signal transmission power and decreases with the distance. The theoretical formula is as follows:

$$RSSI(d) = A - 10\,n\,\log_{10}(d),$$

where $n$ represents the transmission loss parameter, $d$ represents the distance between the transmitter and the receiver, and $A$ represents the signal strength received at a distance of 1 meter from the transmitter.
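A small sketch of this log-distance model is given below; the reference strength at 1 m and the transmission loss parameter are assumed example values, not calibration results from the experiment.

```python
import math


def rssi_at_distance(d_m, a_dbm=-45.0, n=2.5):
    """Received signal strength (dBm) at a distance of d_m metres.

    a_dbm: signal strength received 1 m from the transmitter (assumed example value)
    n:     transmission (path) loss parameter of the environment (assumed example value)
    """
    return a_dbm - 10.0 * n * math.log10(d_m)


def distance_from_rssi(rssi_dbm, a_dbm=-45.0, n=2.5):
    """Invert the model to estimate the transmitter-receiver distance in metres."""
    return 10.0 ** ((a_dbm - rssi_dbm) / (10.0 * n))


# Illustrative check: predict the RSSI at 10 m, then recover the distance from it.
rssi_10m = rssi_at_distance(10.0)
print(f"RSSI at 10 m: {rssi_10m:.1f} dBm, estimated distance: {distance_from_rssi(rssi_10m):.1f} m")
```

Inverting the model in this way is the usual basis for RSSI-based distance estimation between Zigbee nodes.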

As shown in Figure 12, Zigbee has strong anti-interference ability even as the transmission distance increases. Although Zigbee's signal transmission is lossy, the effect is small. The anti-jamming capability of Bluetooth is significantly lower than that of Zigbee, and obviously, the greater the transmission distance, the greater the signal loss. At the same time, as the number of interference nodes increases, both Zigbee and Bluetooth suffer signal loss, but the loss of Bluetooth is significantly greater. Therefore, Zigbee has good anti-interference performance.

As shown in Figure 13, the fault response time has little to do with where the fault occurs: regardless of the fault point, the system responds in the same, short time. However, the fault handling time increases as the number of failure points increases: the larger the fault, the slower the overall response, which is largely determined by the time needed to clear the hidden danger. Overall, Zigbee offers a short fault response time and quick elimination of hidden dangers, which helps solve transmission line faults as soon as possible.

As shown in Figure 14, fault localization accuracy is higher over a fuzzy (coarse) range; once the localization is extended to a precise range, the troubleshooting accuracy decreases. At the same time, the closer to the obstacle, the greater the troubleshooting error. It is therefore recommended to add a manual troubleshooting channel alongside the monitoring system to reduce errors.

5. Discussion

The experiment selected four basic indicators of the transmission line for testing: energy consumption, anti-interference, fault monitoring recall, and accuracy. Through comparison with Bluetooth, the performance of Zigbee is shown to be better, with good adaptability. However, the system still has difficulty locating short-distance obstacles and identifying interference signals. For this reason, it is necessary to install interference detection equipment in the monitoring system to strengthen the investigation of interference signals. At the same time, the software and hardware of the system can be optimized and the positioning accuracy improved, so that related hidden dangers can be investigated as soon as possible.

6. Conclusion

At present, the rapid development of overhead transmission lines and the increase in the amount of equipment have led to increasing inspection and maintenance workloads, and existing operation and maintenance methods cannot meet the demand. This work addresses practical problems in real-time online monitoring of transmission lines, such as the construction process of the online monitoring system, the realization of functional modules, and the realization of communication protocol functions. Based on knowledge of line health during operation, it provides strong support for the operation and maintenance of the power grid and has practical significance for the advancement of line monitoring. However, the experiment also has certain shortcomings. System power consumption still needs to be optimized, and the system's adaptability to harsh environments needs to be further improved. The measurement accuracy of the system in extreme-temperature and humid environments has not yet reached the expected target. The stability and compatibility of the software have not been rigorously tested, and the security of data storage has not been fully guaranteed. In the future, the status information and alarm information functions of the monitoring device will be further deepened, the value contained in the monitoring data will be explored, and the statistics and printing functions for the monitoring data will be enriched. The ease of use of the system will also be improved, and a reasonable attempt will be made to evaluate the stability of the monitoring device, providing an important reference for the management of monitoring devices.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.