The Leach (low-energy adaptive clustering hierarchy) algorithm is a self-organizing clustering topology algorithm. Its execution is cyclical: each cycle is divided into a cluster-building phase and a stable data-communication phase. In the cluster-building phase, adjacent nodes cluster dynamically and cluster heads are generated at random. In the data-communication phase, the nodes in a cluster send their data to the cluster head, which performs data fusion and forwards the result to the aggregation (sink) node. Because the cluster head must complete data fusion, communicate with the sink node, and perform other work, its energy consumption is large. The Leach algorithm ensures that each node acts as cluster head with equal probability, so the nodes in the network consume energy relatively evenly. Its basic idea is to select cluster-head nodes randomly in rounds, evenly distributing the energy load of the whole network across the sensor nodes; this reduces network energy consumption and extends the network life cycle. Leach repeatedly reconstructs clusters during its operation. This paper studies parameter detection for wireless sensor networks based on the Leach algorithm using an on-chip embedded debugging system. Because the classical low-power adaptive clustering hierarchy protocol (Leach) suffers from energy imbalance and short node life cycles, this paper applies embedded debugging technology to the Leach algorithm and examines the residual energy and position of nodes in a wireless sensor network. The Leach algorithm uses the concept of rounds: each round consists of an initialization phase and a stabilization phase. In the initialization phase, each node generates a random number between 0 and 1; if the number generated by a node is less than the set threshold T(n), the node announces that it is a cluster head.
Simulation results for the parameter detection show that the approach taken in this paper is feasible and reasonable.

1. Research Background

Since entering the 21st century, high and new technologies have developed rapidly. The development and fusion of wireless communication technology, sensor technology, embedded computing technology, and distributed information processing technology have driven the emergence and growth of modern wireless sensor networks. A wireless sensor network is composed of sensor nodes distributed randomly in a monitoring area and is used to monitor specific environmental information in that area. As a new technology spanning communication, automation, and computing, the wireless sensor network has the advantages of high monitoring accuracy, good fault tolerance, and wide coverage. It has been widely studied and applied in military defense, urban management, environmental monitoring, hazardous sites, and remote control. As the product of the integration of multiple high technologies, the wireless sensor network has become an active research branch of computer science and network communication science and has attracted great attention from academia and industry. It is regarded as one of the technologies that will have a major impact in the 21st century and was one of the key research areas during China's eleventh five-year plan period [1, 2].

Research into wireless sensor networks began in the late 1990s, when the Defense Advanced Research Projects Agency (DARPA) funded the Network Embedded Software Technology (NEST) project. The University of California, Berkeley, developed a wireless sensor network development system called Mote. Since then, DARPA has spent tens of millions of dollars a year on wireless sensor network technology. In early August 2002, the National Science Foundation (NSF), DARPA, NASA, and 12 other major research institutions jointly organized a national seminar on future sensor systems at the University of California, Berkeley [3], discussing future sensor systems and their development direction in engineering applications. The meeting covered the realization of wireless sensor networks and research on sensor data transmission, analysis, and decision-making technologies, which, together with new sensor technologies based on nanotechnology and micromachinery, represent the frontier of future sensor research and battlefield information perception. These technologies have extremely important research significance for structural monitoring, homeland security, counterterrorism, and other applied fields.

In the following years, several laboratories at the University of California, Berkeley, continued in-depth research on wireless sensor networks and made pioneering studies from different perspectives. Many other American universities and research institutions have also done substantial work and made great progress in wireless sensor networks, for example, CENS (Center for Embedded Network Sensing), WINS (Wireless Integrated Network Sensors), and NESL (Networked and Embedded Systems Laboratory) [4]. The Massachusetts Institute of Technology (MIT) is working on low-power wireless sensor networks with DARPA support; SPIN (Sensor Protocols for Information via Negotiation) is one of MIT's protocols [5]. Auburn University has also received DARPA support for extensive research into self-organizing sensor networks. The Computer Systems Research Laboratory of Binghamton University has done much research on mobile self-organizing network protocol design and the application layer of sensor network systems [6]. The Mobile Computing Laboratory at Cleveland State University combines wireless sensor network technology with IP-based mobile networks and ad hoc networks [7]. Meanwhile, American companies such as Intel, Crossbow, Freescale, and Ember have participated in wireless sensor network research, and European companies such as Philips, Siemens, Ericsson, and Chipcon [8] have studied wireless sensor networks as well. In March 2004, Japan's Ministry of Internal Affairs and Communications held a seminar on "ubiquitous wireless sensor networks" [9], which mainly discussed the research and development, standardization, social cognition, and promotion policies of wireless sensor networks. NEC, OKI, and other companies have also launched relevant products and conducted application tests. In China, research on wireless sensor networks is mainly led by universities and research institutions.
Zhejiang University has established a wireless sensor network test group, which specializes in the hardware implementation of wireless sensor networks.

2. Leach Principle

2.1. The Algorithm of Leach

The Leach algorithm is a low-power adaptive clustering routing algorithm. It executes periodically; each cycle is defined as a “round,” and each round is divided into two phases: the negotiation phase and the stabilization phase. The negotiation phase is the cluster-formation phase; it mainly completes the selection of cluster heads, the assignment of nodes to clusters, and the initialization of the algorithm. After a node is selected as cluster head, it first sends a broadcast carrying its own ID and other information, so the other nodes receive broadcasts from several different cluster heads. If a cluster-head node receives the broadcast of another cluster head, it discards the message directly. If an ordinary node receives the broadcasts, it judges their signal strength to decide which cluster to join, replies to the corresponding cluster head with a join-request packet, and becomes a member of that cluster. The cluster head maintains an information table of the cluster members and allocates a communication time slot to each member to avoid confusion in intracluster communication; the time slots together make up one frame of fixed length. The stabilization phase is the data-communication phase of the cluster. Member nodes collect data and send it to the cluster head by polling, according to the time slots allocated by the cluster head. The cluster head first fuses the received data and then sends it to the aggregation node. The longer the stabilization phase, the more efficient the algorithm. After the data has been sent, a new round begins. The implementation is as follows.
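As an illustration, the set-up phase described above can be sketched in Python. This is a simplified model, not the paper's implementation: node coordinates stand in for real radios, and geometric distance is used as a proxy for received signal strength.

```python
import math
import random

def leach_setup(nodes, p, r, prev_heads):
    """One Leach set-up phase: elect cluster heads by the random
    threshold test, then attach every remaining node to the head
    with the strongest signal (modelled as the nearest head).
    nodes      -- list of (x, y) positions
    p          -- desired fraction of cluster heads
    r          -- current round number
    prev_heads -- nodes that already served as head this cycle"""
    t = p / (1 - p * (r % round(1 / p)))  # threshold T(n)
    heads = [n for n in nodes
             if n not in prev_heads and random.random() < t]
    if not heads:                          # degenerate round: force one head
        heads = [random.choice(nodes)]
    clusters = {h: [] for h in heads}
    for n in nodes:
        if n in heads:
            continue
        nearest = min(heads, key=lambda h: math.dist(n, h))
        clusters[nearest].append(n)
    return clusters
```

Every node ends up either as a head or as a member of exactly one cluster, mirroring the join-request exchange described above.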

The selection of cluster heads in the Leach algorithm is random. Two main factors determine the cluster heads: the round number of the current algorithm and the ratio of cluster-head nodes to the total number of nodes. No node plays a dominant role in the clustering process; each node runs the algorithm itself, identifies its own role, and joins the corresponding cluster. At the beginning of cluster establishment, every sensor node in the network generates a random number in [0, 1] and compares it with the threshold T(n). If the random number is less than T(n), the node is selected as a cluster head for this round and broadcasts a message to inform the other sensor nodes; if the random number is greater than T(n), the node is not selected. Once a node has served as cluster head, its T(n) becomes 0, which prevents the same node from acting as cluster head repeatedly and consuming excessive energy. The threshold T(n) is calculated as

T(n) = p / (1 − p · (r mod (1/p))), if n ∈ G; T(n) = 0, otherwise,

where p is the percentage of cluster heads among all nodes, r is the number of the current election round, r mod (1/p) represents the number of rounds in the current cycle in which cluster heads have already been selected, and G is the set of nodes that have not yet served as cluster head in the current cycle.
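The threshold computation can be written directly from the formula. The function below is a minimal sketch; the parameter names p, r, and the eligibility flag follow the definitions above:

```python
def leach_threshold(p, r, in_G):
    """Classical Leach threshold T(n).
    p    -- desired fraction of cluster heads (e.g. 0.05)
    r    -- current round number
    in_G -- True if the node has NOT yet been head in this cycle"""
    if not in_G:
        return 0.0                         # already served as head
    return p / (1 - p * (r % round(1 / p)))
```

With p = 0.05, the threshold starts at 0.05 in round 0 and rises to 1.0 by round 19, the last round of the 1/p cycle, so the remaining candidates are elected with certainty.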

As can be seen from the above equation, as the algorithm proceeds, the number of nodes that have served as cluster head increases; that is, the value of r mod (1/p) keeps growing, so the value of T(n) also increases and the remaining nodes become more likely to be chosen as cluster heads. When only one node has not yet been a cluster head, T(n) = 1 and it is elected with certainty. In addition, T(n) takes the same value at r = 0 and r = 1/p, and likewise at r = 1 and r = 1/p + 1. Thus, after the algorithm executes one 1/p-round cycle, the sensor nodes in the network return to the state in which cluster heads are selected with equal probability, and the cycle repeats. The Leach algorithm therefore gives every node in the network exactly one chance to become cluster head within a period of 1/p rounds; after the 1/p-round cycle, the cluster heads are reselected from scratch. T(n) is thus the average probability that a node that has not yet acted as cluster head is selected in the r-th round.
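A short simulation can illustrate the equal-probability property discussed above: over one full 1/p-round cycle, every node is elected cluster head exactly once, because T(n) rises to 1 for the last remaining candidates. This is an illustrative sketch, not the paper's simulator:

```python
import random

def simulate_cycle(num_nodes, p, seed=0):
    """Run one full 1/p-round cycle of threshold elections and count
    how often each node becomes cluster head.  Nodes that have already
    served are excluded (their T(n) is 0), so with the rising threshold
    every node should be head exactly once per cycle."""
    rng = random.Random(seed)
    rounds = round(1 / p)
    been_head = [False] * num_nodes
    head_count = [0] * num_nodes
    for r in range(rounds):
        t = p / (1 - p * (r % rounds))      # threshold for this round
        for n in range(num_nodes):
            if not been_head[n] and rng.random() < t:
                been_head[n] = True
                head_count[n] += 1
    return head_count
```

In the final round of the cycle the threshold reaches 1, so all remaining candidates are elected and the per-node head count comes out to exactly one.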

Suppose that there are N nodes in total in the sensor network and that k cluster heads are selected in each round, so that k = N × p. Let T(n) denote the probability that a candidate node becomes a cluster head in the (r + 1)th round; then the expected number of cluster heads in the (r + 1)th round is the number of candidate nodes multiplied by T(n), which should equal k.

After r rounds, the number of nodes in the current cycle that have not yet become cluster head is N − k · (r mod (1/p)).

Since the nodes that have already served as cluster head cannot be selected again in the current cycle, the above formula gives the number of remaining candidates, and the average probability that a node becomes cluster head in the (r + 1)th round is T(n) = k / (N − k · (r mod (1/p))).

Substituting k = N × p into the above equation, the average probability becomes T(n) = Np / (N − Np · (r mod (1/p))) = p / (1 − p · (r mod (1/p))).

Based on the above derivation, the following formula can be obtained:

T(n) = p / (1 − p · (r mod (1/p))), if n ∈ G; T(n) = 0, otherwise.

When a node is selected as a cluster head, it sends a notification message informing the other nodes that it is a new cluster head. Each noncluster-head node selects the cluster to join according to its distance from the cluster heads and notifies the chosen head. When the cluster head has received all join requests, it generates a TDMA schedule and announces it to all nodes in the cluster. To avoid signal interference from nearby clusters, the cluster head also determines the CDMA code used by all nodes in the cluster; the code used in the current phase is sent together with the TDMA schedule. When the nodes in the cluster receive this message, they send data in their respective time slots. After a period of data transmission, the cluster head collects the data sent by the nodes in the cluster, runs the data fusion algorithm to process it, and sends the result directly to the sink node.
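The TDMA schedule generated by a cluster head can be sketched as a simple even split of one frame among the members. The frame length and member identifiers below are hypothetical, chosen only for illustration:

```python
def assign_tdma_slots(member_ids, frame_length_ms=100):
    """Hypothetical TDMA schedule: the cluster head splits one frame
    evenly among its members, so each member transmits only in its
    own (start, end) interval and can sleep otherwise."""
    slot = frame_length_ms / len(member_ids)
    return {m: (i * slot, (i + 1) * slot)
            for i, m in enumerate(member_ids)}
```

For four members and a 100 ms frame, each member gets a disjoint 25 ms slot, which is what prevents intracluster collisions during the stabilization phase.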

The stabilization phase is the data-communication phase. Member nodes communicate according to the time slots allocated by the cluster head, while other clusters perform their own intracluster routing at the same time. Neighboring nodes of different clusters transmitting on the same frequency would generate intercluster crosstalk; to reduce it, intracluster routing is based on a direct-sequence spread spectrum (DSSS) mechanism. Nodes in a cluster treat signals that do not belong to their own group as noise and filter them out, effectively avoiding crosstalk between nodes of adjacent clusters. Cluster heads communicate with the sink node using the CSMA method, and all cluster heads use the same spreading code. Therefore, before establishing a data connection with the sink node, a cluster head must first monitor whether the channel is occupied. If it is, the cluster head that needs to communicate must wait until the spreading code is free and it can seize the channel for data communication; otherwise, it can use the channel to communicate directly with the sink node.

2.2. Key Technologies of Wireless Sensor Network

The premise of topology control in wireless sensor networks is to eliminate unnecessary communication links between nodes, by controlling transmission power and selecting appropriate backbone nodes while preserving network coverage and connectivity, and finally to form an efficient data-forwarding topology. An excellent topology control algorithm can effectively improve the efficiency of the routing and MAC protocols, provide good support for data fusion, time synchronization, and target positioning, save node energy, and extend the network life cycle. Topology control is therefore very important for energy-limited wireless sensor networks.

The task of wireless sensor network protocols is to let the nodes form a multihop data transmission network that uses network bandwidth effectively and guarantees quality of service, under the premise of using network energy efficiently and extending the network life cycle. At present, network-layer and data-link-layer protocols are the focus of research.

Data fusion technology combines multiple data-processing steps to make processing more efficient. Because sensor nodes are prone to failure, a sensor network needs data fusion to process data from multiple sources comprehensively and improve the accuracy of the information. According to how much information content is preserved, data fusion can be divided into two types: lossless fusion and lossy fusion. Lossless fusion preserves all details and removes only redundant information. Lossy fusion saves storage space and energy by ignoring details or reducing data quality. Data fusion technology can be combined with multiple protocol layers of a wireless sensor network. At present, it is widely used in target tracking and automatic recognition, and in wireless sensor network design, data fusion methods tailored to the application are usually the most beneficial.
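The distinction between lossless and lossy fusion can be made concrete with a toy example, assuming numeric sensor readings; a real fusion algorithm would be application specific:

```python
def lossless_fuse(readings):
    """Lossless fusion: drop exact duplicates (redundant information)
    but keep every distinct reading intact."""
    return sorted(set(readings))

def lossy_fuse(readings):
    """Lossy fusion: report only the mean, trading detail for a
    single small value that is cheap to transmit."""
    return sum(readings) / len(readings)
```

A cluster head forwarding `lossless_fuse` output preserves all distinct measurements, while `lossy_fuse` sends one value regardless of cluster size, saving energy at the cost of detail.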

Time synchronization is a key mechanism of wireless sensor networks. When a network system performs time-sensitive tasks or adopts a time-based MAC protocol, the nodes' clocks must be synchronized. Lightweight clock synchronization, receiver-receiver-based synchronization, and sensor-network time synchronization protocols are three basic synchronization mechanisms. Localization covers both the position of the node itself and the position of external targets, and positioning accuracy directly affects the effectiveness of data acquisition. Due to the limitations of the sensor nodes themselves, the localization mechanism must satisfy robustness, self-organization, and energy efficiency. Nodes are generally divided into beacon nodes and unknown nodes according to whether their position is known. Beacon nodes can carry positioning equipment to obtain accurate position information. Unknown nodes take beacon nodes as reference points and use algorithms such as trilateration, triangulation, and maximum likelihood estimation to determine their positions.
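Of the localization algorithms mentioned, trilateration admits a compact closed-form sketch: subtracting one beacon's circle equation from the others yields a linear system in the unknown position. This assumes three noncollinear beacons and noise-free distance measurements:

```python
def trilaterate(beacons, dists):
    """Estimate an unknown node's (x, y) from three beacon positions
    and measured distances.  Subtracting circle 1 from circles 2 and 3
    removes the quadratic terms, leaving two linear equations solved
    here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # nonzero if beacons are noncollinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With noisy distances one would instead solve the same linear system in a least-squares sense over more than three beacons.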

2.3. Embedded Debugging Technology

With the continuous improvement of chip integration and increasingly powerful functions, the requirements for embedded software development keep rising, and embedded debugging technology keeps evolving with them. Many debugging techniques have emerged in the development of embedded debugging systems, with significant differences between the techniques and the implementation principles they depend on. This paper mainly analyzes and introduces the commonly used on-chip debugging techniques. With the popularity of SoC technology, on-chip debugging began to appear in embedded systems. On-chip debugging embeds control modules in the processor; when a trigger condition is met, the processor enters a specified state [10]. In this state, debugging software running on the host can access various resources (registers, memory, and so on) and execute instructions through a specific communication interface outside the processor (the debug support module port). The basic idea of on-chip debugging is to add an additional hardware debug module inside the processor; the debug software controls the processor's operation and resource access through this module. There are many different implementations of on-chip debugging. Currently, BDM (background debug mode) and JTAG (Joint Test Action Group) are the commonly used techniques [11]. For users, the two technologies provide similar debugging capabilities, but there are significant differences in their implementation principles and debugging standards, which are described below.

Motorola was among the first to recognize the development trend of on-chip debugging technology and implemented the BDM (background debug mode) debugging interface for the first time on its 68300-series processors. The company later applied BDM debugging in a series of PowerPC and ColdFire processors [12].

Taking ColdFire as an example, the implementation mechanism of the BDM debugging function can be briefly analyzed. ColdFire has several levels of internal buses. The debug module is embedded in the processor and can access the internal bus, the CPU kernel information, and the on-chip memory connected to the kernel. Access to the internal bus allows the debug module to obtain address-space and data information that external modules cannot reach. The ColdFire debugging system supports three functions: BDM, real-time debugging, and real-time tracing. The basic principle of BDM (background debug mode) is that, when the processor stops running, the debugging software on the host sends instructions to the target system through the serial interface of the debug module to access registers and storage. BDM mainly relies on two control registers: CSR and TDR [13]. The CSR (configuration status register) configures the operation of the processor and on-chip storage and also reflects the state of the processor's breakpoint logic. The TDR (trigger definition register) configures and controls the hardware breakpoint logic in the debug module. The ABLR (address breakpoint low register) and ABHR (address breakpoint high register) define the valid breakpoint address range. The PBR (program counter breakpoint register) and PBMR (program counter breakpoint mask register) hold the PC breakpoint and its mask. The DBR (data breakpoint register) and DBMR (data breakpoint mask register) hold the data breakpoint and its mask [14].

JTAG (Joint Test Action Group) was established in 1985, originally to develop test standards for PCBs (printed circuit boards) and ICs (integrated circuits). In 1987, the organization proposed a new design-for-testability method: boundary scan test technology. In 1988, the IEEE (Institute of Electrical and Electronics Engineers) and JTAG agreed to jointly develop a boundary scan test architecture. The result was approved by the IEEE as the 1149.1 standard in 1990 and is also known as the JTAG boundary scan standard [15]. The standard defines the boundary scan architecture and its interface. At present, the JTAG interface has become a debugging interface standard widely used around the world, and most existing microprocessors provide one. IC designers such as Intel, ARM, MIPS, and TI have all implemented JTAG debugging interfaces.

The JTAG standard provides two main functions: (1) testing the electrical characteristics of a chip to detect whether the chip is faulty and (2) debugging the programs running on the chip.

The JTAG boundary scan standard allows users to test and debug hardware circuits through the chip's JTAG interface. A JTAG-based processor generally works in one of two modes: normal mode and debug mode. In debug mode, the processor stops execution, and the host debugging software completes various debugging functions, such as setting breakpoints and single-stepping, by sending commands to the debug module interface. In normal mode, the processor simply runs or halts.

A protocol conversion unit (either a hardware unit or a software implementation) is required between the target and the host; it converts debugger commands into commands that the processor's debug interface recognizes. During JTAG debugging, the debugging software presents an interface to the user, receives the user's commands, and returns the execution results after processing. The JTAG debugger relies on the debug module built into the target chip, so debugging can be regarded as a means of accessing the target. JTAG debugging has the advantages of minimal dependence on the target, no change to program operation, stability, and reliability. The debugging system in this paper adopts the JTAG debugging standard.

3. Experiment on the Computer Simulation Software

3.1. Data Source

To better evaluate the performance of the proposed approach for wireless sensor networks, experiments were conducted on wireless sensor networks using both the classical Leach algorithm and the research method of this paper. To make the experimental data and evaluation results more accurate and objective, the wireless sensor network model was set to a 100 m × 100 m area with 30 nodes distributed in the region; the node positions were generated at random. The experiment was repeated 50 times, and all results were averaged to obtain the final data.
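The random deployment used in the experiment can be reproduced in a few lines; the `seed` parameter is an assumption added here for repeatability and is not part of the paper's setup:

```python
import random

def deploy(num_nodes=30, size=100.0, seed=None):
    """Scatter num_nodes sensor nodes uniformly at random over a
    size x size field, matching the experimental set-up
    (100 m x 100 m, 30 nodes)."""
    rng = random.Random(seed)
    return [(rng.uniform(0, size), rng.uniform(0, size))
            for _ in range(num_nodes)]
```

Averaging over 50 independent deployments, as the experiment does, then amounts to calling `deploy` with 50 different seeds and averaging the per-run metrics.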

3.2. Experimental Evaluation Criteria

Because the energy of wireless sensor network nodes is limited, node energy directly affects the node life cycle. To judge how well a research method performs and how closely it meets the requirements of the network, several hard criteria are used to compare different methods.

3.2.1. Time Length of Node Failure

This paper analyzes network lifetime from three aspects: the time of the first node death, the time at which half the nodes have died, and the time of the last node death.

3.2.2. Node Energy Consumption

This paper records the total energy consumption of the nodes in the network in real time and uses it to judge whether the corresponding research method is suitable for the network.

3.2.3. Energy Efficiency

When a node in the network transmits data information, we change the packet size and observe the energy utilization rate; the higher the utilization rate is, the more balanced the energy load will be.

3.2.4. Remaining Nodes

At run time, different algorithms leave different numbers of surviving nodes at the same moment. The more nodes remain, the less energy the network has consumed.

3.2.5. Information Received by the Base Station

Obviously, the more information the base station receives in the end, the more beneficial it is to the work of the observers, thus improving the accuracy of the data.

3.3. Experimental Parameters

The specific parameters of the experiment are shown in Table 1.

3.4. Experimental Data Results

First, the wireless sensor network method based on the classical Leach algorithm is simulated and the resulting data recorded. Then, on the basis of the Leach algorithm, the residual energy and position of the nodes are added as parameters to optimize the node distribution, adopting the strategy that the greater a node's residual energy, the greater its probability of being selected as cluster head. The experimental results are based on the wireless sensor network parameter detection of the on-chip embedded debugging system. The comparison of the number of surviving nodes between the network using the classical Leach algorithm and the network using the research method, obtained with computer simulation software, is shown in Figure 1; the comparison of the nodes' remaining energy is shown in Figure 2.
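The energy-aware selection strategy can be sketched by weighting the classical threshold with the residual-to-maximum energy ratio. The exact weighting used in the paper is not specified here, so the multiplicative form below is an assumption:

```python
def improved_threshold(p, r, in_G, e_residual, e_max):
    """Energy-aware variant of the Leach threshold (assumed form):
    scale the classical T(n) by the node's residual/maximum energy
    ratio, so higher-energy nodes are more likely to become cluster
    heads.  The ratio lies in [0, 1] and does not merely shrink as
    energy is consumed, since e_max is the current maximum energy."""
    if not in_G or e_max <= 0:
        return 0.0
    base = p / (1 - p * (r % round(1 / p)))   # classical T(n)
    return base * (e_residual / e_max)
```

A node holding half the current maximum energy thus gets half the classical election probability, which steers the cluster-head role toward energy-rich nodes and evens out consumption.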

After sorting out the experimental data, the node deaths of the wireless sensor networks using the classical Leach algorithm and the research method in this paper are shown in Table 2, and the node energy consumption in Table 3; the data in the tables are numbers of rounds.

3.5. Analysis of Experimental Data

Figure 1 shows the relationship between the number of surviving wireless sensor network nodes and the number of rounds. In the simulation of the Leach algorithm, the first dead node appears at round 580, whereas with the method in this paper it appears at round 915, nearly 1.6 times later than with the classical Leach algorithm. Comparing the time at which 50% of the network nodes have died, the Leach algorithm takes 1098 rounds, while this method extends the time by 53.2% to 1682 rounds. Comparing network lifetime, the Leach algorithm's network fails at round 1897, whereas the method adopted in this paper lasts 2678 rounds, so the failure time of the whole network is considerably delayed. As the comparison in Figure 1 shows, under the same simulation conditions the research method yields a longer network life cycle than the Leach algorithm alone. Figure 2 depicts the relationship between the total residual energy of the network nodes and the number of rounds. The life cycle of the whole network is related to the energy consumption of each node: if one node consumes much more energy than the others, it soon fails, whereas if the energy consumption of all nodes is similar, the network life cycle is prolonged. The data in Figure 2 demonstrate this. The method in this paper, which uses the ratio of current residual energy to current maximum energy as a parameter, is more reasonable: the parameter is distributed between 0 and 1 and does not simply decrease as energy is consumed.

The node-death and residual-energy results of the wireless sensor network bear this out. In this paper, the location of the aggregation node is improved by finding the point in the deployment region with the minimum sum of squared distances to the other nodes; the center of these points serves as the sink node, and this positioning method is reasonable. In the improvement of the node energy parameter, the larger a node's residual energy, the higher its probability of becoming a cluster head; the election parameter is more evenly distributed and is always less than or equal to 1. The simulation data in Tables 2 and 3 verify the superiority of the research method: node mortality is reduced and network life is extended.
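The sink-placement rule above can be computed directly: the point minimizing the sum of squared distances to a set of nodes is their centroid. A minimal sketch:

```python
def sink_position(nodes):
    """Place the sink at the point minimising the sum of squared
    distances to all nodes -- by a standard result, this is simply
    the centroid of the node positions."""
    xs, ys = zip(*nodes)
    return (sum(xs) / len(nodes), sum(ys) / len(nodes))
```

Placing the sink at the centroid keeps the average (squared) transmission distance from cluster heads to the sink as small as possible, which is consistent with the energy savings reported above.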

4. Research Conclusions

Wireless sensor network (WSN) is a new technology with broad application prospects in military, environmental, medical, and civil fields. Wireless sensor networks are characterized by limited communication capacity, limited node energy, and limited computing capacity, together with large numbers of nodes, a wide distribution range, and a dynamic topology. Research on wireless sensor networks is therefore of great significance, and the research goal is to improve their performance. The wireless routing algorithm is an important topic in wireless sensor network research: its performance directly affects the operating efficiency of the whole network and determines the network's lifetime. This paper therefore first analyzes the Leach algorithm, adding the ratio of a node's residual energy to the current maximum energy as a parameter of the classical Leach algorithm so as to select cluster heads more reasonably. Second, embedded on-chip debugging technology is used to detect the parameters of a wireless sensor network based on the Leach algorithm. Finally, simulation experiments are carried out with both the classical Leach algorithm and the research method, verifying the feasibility and effectiveness of the method. The results show that this method can balance the overall energy consumption of the network nodes and improve the lifetime of wireless sensor networks.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by Xinjiang Uygur Autonomous Region Innovation Environment (talent, base) Construction Foundation (Xinjiang NSFC Program Foundation 2020D01A132): Research and Implementation of Horizontal Well Inversion Optimization Interpretation Method, Jingzhou Science Technology Foundation (2019EC61-06), Hubei Science and Technology Demonstration Foundation (2019ZYYD016), Zhaoqing Science and Technology Innovation Guidance Project (201904030401), and Vertical Research Planning Project of Cloud Computing and Big Data Professional Committee of Higher Vocational College of Guangdong Institute of Higher Education in 2019 (GDJSKT19-18).