Abstract

Wireless sensor networks (WSNs) attract considerable attention in current IoT-enabled industrial and domestic applications, where either homogeneous or heterogeneous sensors are deployed to acquire information of interest. WSNs are designed to operate using self-powered sensor nodes because their deployment sites are often geographically critical. Such nodes must operate energy efficiently so that network longevity remains high. Cluster head selection is a crucial stage in a WSN architecture that focuses mainly on minimizing network energy consumption. It groups sensor nodes so that a well-structured network cluster is formed with an enhanced lifetime and low power consumption. LEACH, a popular clustering technique, and its variants are found to be more energy efficient than their counterparts. The authors propose a novel fully connected energy-efficient clustering (FCEEC) mechanism using the electrostatic discharge algorithm to establish a fully connected network with shortest-path routing from sensor nodes (SNs) to the cluster head (CH) in a multihop environment. The proposed electrostatic discharge algorithm (ESDA) enhances network lifetime while attaining energy-efficient full connectivity between sensor nodes. As a result of ESDA, the dead node count is reduced significantly, so network longevity is increased. Finally, simulation results exhibit improved performance metrics such as energy efficiency, dead node count, packet delivery, and network latency compared to conventional CH selection approaches.

1. Introduction

Wireless sensor networks (WSNs) make highly significant contributions across several applications such as environment monitoring, seismic control, agriculture management, security surveillance, and many other similar areas. In recent years, WSN research has grown rapidly because of the unique characteristics of these networks. A WSN contains several fully connected sensor nodes linked wirelessly. Every sensor node collects events of interest, acquires data, and helps route the data back to a base station (BS). Wireless communication occurs between numerous sensor nodes with the aid of a sensor node management system that enables network monitoring and data collection for all specific tasks, and data transmission is then performed by connecting all involved nodes through a master node to a nearby RF terminal [1]. The WSN communication between the BS and a network cluster is established using the traditional CSMA protocol; data transfer between sensor nodes (SNs) and the BS occurs using a cluster head (CH) as indirect access [2].

A fundamental goal in any sensor node model is extending the lifetime of the WSN. In many cases, an exhausted node cannot be recharged or have its battery replaced. Node positioning in traditional WSNs is less efficient because of power demands and complexity. The synchronization process between nodes and the CH is bounded by several network constraints that make the cluster head selection process tedious for all prevailing clustering algorithms. Hence, multiple approaches are used to decide the CH; one of them makes the decision by estimating node energy. In general, a CH is chosen by setting up a minimum energy criterion using energy thresholds to ensure full connectivity and reliability of the given WSN. Figure 1 portrays the structure of a wireless sensor network.
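As a minimal illustration of such an energy-threshold criterion, the Python sketch below filters out low-energy nodes and picks the most energetic one as CH; the 0.5 J threshold and the node list are assumed example values, not parameters of this study.

```python
# Minimal sketch of an energy-threshold CH criterion; the threshold and the
# node tuples are assumed example values, not parameters of this paper.
ENERGY_THRESHOLD_J = 0.5

nodes = [("n1", 0.9), ("n2", 0.3), ("n3", 0.7), ("n4", 0.45)]  # (id, residual energy in J)

# Only nodes above the threshold remain eligible, which avoids electing a CH
# that would die mid-round and break connectivity.
eligible = [n for n in nodes if n[1] >= ENERGY_THRESHOLD_J]

# A simple rule: the eligible node with the highest residual energy becomes CH.
cluster_head = max(eligible, key=lambda n: n[1])
print(cluster_head)  # ('n1', 0.9)
```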

WSNs are similar to conventional ad-hoc networks: they contain hundreds of nodes depending on the network scale, interconnected with their respective CHs to sense the parameters of interest, and are involved in data acquisition and forwarding to the BS, which further broadcasts to subsequent RF points. The main challenges in network management concern framing a network area while meeting quality requirements such as scalability, reliability, and resource management [3]. These constraints are inherited from ad-hoc networks and are included in successful WSN topology management. Recent research findings make it evident that network clustering is the most popular topology-management strategy, working by grouping nodes and designating CHs and member nodes for further management of the intended tasks. These clustering techniques mainly concentrate on efficient energy consumption to attain durable and reliable networks.

Data aggregation and clustering processes mainly focus on reducing overall network energy consumption by eliminating a certain amount of transmission data, thereby increasing network scalability and lifetime. Other algorithms such as artificial neural networks, reinforcement learning, and swarm intelligence help reduce the transmitted data size using some of the distributive characteristics of the network.

To hold control over the dynamic nature of networks, efficient algorithms must be used to deploy a reliable and efficient sensor network. Many research works have incorporated machine learning techniques to eliminate redundant data being transmitted. These techniques bring forth various practical solutions that improve resource utilization and prolong the lifetime of the entire network. In location-centric approaches, CH selection is typically performed by choosing a node close to a desired location. CH selection based on a typical location adds computational complexity while locating a suitable sensor node, hence leading to poor selection accuracy and duplicated node selection.

1.1. WSN Cluster Architecture

The WSN architecture is always affected by several constraints such as fault tolerance, energy efficiency, and scalability, to name a few. The objective of cluster head selection is to minimize the transmit power of individual nodes by optimizing the location of sensor nodes. The edge sensors always look inward for neighboring sensors to transmit data while utilizing minimal transmit power, whereas the sensors between the edges tend to provide full connection to nodes pointing towards the edges of the network. As the network scale grows, it becomes computationally challenging to check all locations of each sensor to ensure optimality [4]. Hence, metaheuristic search techniques are employed to find the optimal solution. As a matter of fact, there is always a tradeoff between accuracy and complexity while searching for an optimum solution, which in this case involves CH selection and flexible network scaling.

A clustered architecture systematically groups the sensor nodes into clusters in which all nodes are administered by a single high-energy CH [5]. Each sensor in a cluster transfers messages to its corresponding CH, and the CH, in turn, conveys the gathered information to the BS, which is generally considered to be an access point (AP) attached to a wired network. A clustered network architecture lets sensor networks exploit their inherent potential for data aggregation and transmission.

In the hierarchical approach to network routing, clusters are generally determined with respect to the energy preserved by the sensors and the geometric closeness of each sensor to its respective cluster head (CH). The CH of each cluster is sufficient to convey the complete information to the BS, whereas other sensor nodes merely pass their signals to the CH. Clustering diminishes the need to keep a centralized node to synchronize all the connecting nodes. Clusters form an integral part of current wireless sensor networks. With clustering, sensor networks perform better than with other conventional routing algorithms, making data communication flexible and extending network lifetime.

WSNs can be organized on an ad-hoc basis comprising a sufficient number of sensor nodes. As clustering helps preserve communication bandwidth, the dimension of the routing table is downsized. Clustering eliminates the need to preserve the given network topology. In a clustered WSN, overall energy consumption is reduced. Given the predictable behavior of the network, the battery life of every sensor node is enhanced, and network upscaling becomes possible when clustering is performed correctly. The major design features considered while setting up network clustering are the size of clusters, intracluster communication, mobility of sensors and cluster heads, sensor variety and position, hierarchy levels, and overlaps. The crucial challenges of clustering include connectivity, rotating the role of cluster heads, medium access control layer design, sensor duty cycling, the best possible cluster dimension, and sensor coordination with peer nodes. The data accumulated at the CH are updated whenever there is movement from one node to another.

As seen from research records, many existing research studies on WSNs utilize a location-based approach for clustering, and optimized locations are derived during the CH selection phase. Therefore, clustering basically locates the CH at the center of the clustered nodes. Firstly, the major problem with selecting an optimum CH location is its unreliability, as the CH location differs each time from the original position. The clustering approach also causes other challenges such as the complexity of computing nearby node locations and energy after fixing a CH. This increases network energy consumption, thereby reducing its lifetime [6]. Secondly, any mistake in CH selection leads to drastic changes in network performance. This happens due to the diversity of CH node selection from the given search space. In such a case, the optimal CH location may vary from its original location, leading to confused CH selections. Finally, when a CH node is selected from nearby clusters, it will have an optimized cluster head node location among the available clusters, with a CH count less than the total number of cluster member nodes. Thus, all the finite characteristics of a sensor network should be fully accounted for during the clustering process, regardless of location information.

The energy expense in a WSN comprises, firstly, sensing the intended parameter and, secondly, conveying the data to the BS. In fact, WSNs consume maximum energy during data transmission rather than during the sensing and processing stages, which drains power sources rapidly, leaves many nodes to die at a fast pace, and, as a result, reduces the network life expectancy. Moreover, the network fails when even a single node in the chain fails to take part in the data transmission process with its head node. Hence, WSNs are far more sensitive and vulnerable to operating energy than standard wireless networks. In the case of direct transmission from a sensor to the BS, the sensor nodes are easily exhausted and die soon, which is undesirable in a WSN. The lifespan of a network can be enhanced by employing a power-efficient clustering architecture while carefully choosing sensor quality, network area, and the number of WSN nodes. It is concluded that WSN clusters support careful power consumption and efficient energy utilization with the added advantages of network longevity and packet delivery. Altogether, this approach enables remote data acquisition of physical processes over a wide geography and makes the data available in the Internet cloud through popular IoT technology, extending wireless connectivity across people and machines as part of the artificial intelligence (AI) initiative.
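The dominance of transmission cost described above is commonly quantified with the first-order radio model used throughout the LEACH literature. The following Python sketch illustrates it; the constants are typical textbook values assumed for illustration, not parameters reported in this paper.

```python
# Sketch of the first-order radio energy model commonly used in LEACH-style
# analyses. Constants are typical textbook values, assumed for illustration only.
E_ELEC = 50e-9       # electronics energy per bit for transmit/receive (J/bit)
EPS_FS = 10e-12      # free-space amplifier energy (J/bit/m^2)
EPS_MP = 0.0013e-12  # multipath amplifier energy (J/bit/m^4)
D0 = (EPS_FS / EPS_MP) ** 0.5  # crossover distance between the two propagation models

def tx_energy(k_bits: int, d: float) -> float:
    """Energy to transmit k_bits over distance d (metres)."""
    if d < D0:
        return E_ELEC * k_bits + EPS_FS * k_bits * d ** 2
    return E_ELEC * k_bits + EPS_MP * k_bits * d ** 4

def rx_energy(k_bits: int) -> float:
    """Energy to receive k_bits."""
    return E_ELEC * k_bits

# Transmission dominates: sending a 4000-bit packet over 100 m costs several
# times more than receiving it, which is why clustering shortens transmit distances.
print(tx_energy(4000, 100.0), rx_energy(4000))
```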

1.2. Benefits of Clustering

Clustering enhances the lifespan of WSNs by providing high energy efficiency. The benefits of WSN clustering are listed as follows:
(i) Highly energy efficient.
(ii) Cumulative or condensed information is conveyed directly between the CH and BS to reduce the number of broadcast nodes connecting to the BS.
(iii) SNs aggregate data and send them to the CH, which further combines the different data into packets and delivers them to the BS. This reduces the energy use of individual sensor nodes, as the CH alone communicates with the BS and not any other non-CH sensor node.
(iv) Redundant messages circulating among SNs are eliminated since the SNs are required to relay messages or information only to the CH.
(v) It is not essential to preserve the topology in a sequence since SNs make contact only with a particular cluster.
(vi) By adopting a TDMA schedule to connect the clusters, the CH passes messages to the BS in allotted time slots, effectively avoiding packet collisions (a minimal sketch of this intra-cluster schedule follows this list).
(vii) Due to TDMA scheduling, battery life is extended.
(viii) The network becomes highly scalable, unlike a conventional network without clustering.
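Items (iii) and (vi) above can be pictured with the following minimal Python sketch of an intra-cluster TDMA frame with aggregation at the CH; all identifiers, sample readings, and the mean-based aggregation rule are hypothetical and not taken from this paper.

```python
# Illustrative sketch: members transmit in assigned TDMA slots and the CH forwards
# one aggregated packet to the BS. Names and values are hypothetical examples.
from statistics import mean

member_ids = [3, 7, 11, 19]                                         # non-CH nodes in one cluster
tdma_schedule = {nid: slot for slot, nid in enumerate(member_ids)}  # one slot per member

# Readings arrive one per slot, so transmissions inside the cluster never collide.
frame = {3: 21.4, 7: 21.9, 11: 22.1, 19: 21.7}  # node id -> sensed value for this frame

# The CH condenses the whole frame into a single packet for the BS, so only the CH
# spends energy on the long-range link.
aggregated_packet = {"ch_id": 42, "frame_mean": mean(frame[n] for n in member_ids)}
print(tdma_schedule, aggregated_packet)
```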

The key features involved in cluster-oriented routing protocols are reliability, fault tolerance, scalability, information accumulation, etc. This study focuses on lowering the energy usage of nodes during simultaneous data transmission from the different clusters to the BS. At present, researchers are working on low-complexity search algorithms using nature-inspired metaheuristic models to solve multiobjective combinatorial optimization problems. This article proposes such an algorithm to boost energy efficiency while reducing search complexity in a large-scale WSN scenario.

2. Related Works

It is evident from various literature sources that wireless sensor network architectures mainly focus on achieving desirable energy efficiency while carefully managing constraints, namely, heterogeneity, mobility, energy saving, lifespan, etc. A well-placed group of nodes is algorithmically organized to form a cluster with the objective of raising the energy efficiency of wireless sensor networks. Here, the key deciding parameters are quality of service, load balancing, and energy minimization. Each node connects to a cluster head, which then collects and conveys the sensors' data to the destination. CHs initiate transmission, whereas the other nodes in the network are members which direct data from the node to the BS wirelessly. In the clustering approach, any information sent directly through resource-constrained nodes causes energy depletion, inefficiency, and interference. Several research works have already been carried out surveying WSN clustering. Clustering methods such as HEED, BEENISH, FLOC, and LEACH have been proposed, with numerous varieties and extensions considering equal and unequal clusters.

An improved LEACH protocol [7], known as LEACH-Impt, is reported in the literature and shows better performance than traditional techniques. LEACH-Impt maintains various disjoint paths inside the cluster nodes as a constituent of the routing topology. An optimal path is selected by considering power consumption and residual energy when routing hops across nodes. The proposed LEACH-Impt has lower energy consumption than its standard counterparts. Random CH selection is one drawback of this approach, causing losses in the collected and transmitted information.

An efficient fuzzy logic approach for CH selection [8] has been proposed to choose the best cluster head (CH); it achieves proficient coordination between the nodes in the WSN by incorporating a fuzzy inference system. In the proposed FBECS, a probability is assigned to every node of the network with respect to its associated distance using fuzzy logic. The performance measures of FBECS show better load balancing, stability, and extended lifetime. A new CH selection is performed by a cluster chain weight metrics (CCWM) approach based on a ranking procedure [9]. The node with the highest position metric within its network range is considered the CH. It is concluded that the proposed CCWM method offers flexible weight-factor changing characteristics, which enhance network throughput.

Song and Zhao [10] presented an unequal-clustering energy-efficient algorithm that can be applied to large-scale WSNs with load balancing. An improved min-max ant colony optimization (ACO) identifies the optimal path between nodes and the CH, which conserves energy and extends network lifetime. In an effort to establish optimum routing of nodes [11], a high energy probability criterion is set for a CH by calculating an energy-aware threshold function. To discover the optimal path from node to sink, two well-known metaheuristic algorithms, ant colony optimization and particle swarm optimization (PSO), are used together. When ACO performs tracking of data from its surrounding CHs, a few ants (CHs) are synthesized as sinks. Finally, PSO is used to find the shortest routing path.

Another research work on WSN cluster head search used the butterfly optimization algorithm (BOA) for selecting optimal nodes as CHs from the group of available nodes [12]. The BOA considers parameters such as distance and remaining energy to select the best CH node. A routing protocol is then optimized with the ACO algorithm to improve the performance of the WSN. Literature findings also put forth a novel ESO-LEACH algorithm [13] for CH selection that adopts an improved set of rules and nodes. The proposed ESO-LEACH overcomes the shortcomings of the conventional LEACH protocol by maintaining consistency and electing the best CH at each iteration.

The results of another load-balanced PSO-based routing protocol affirm that the proposed approach suits optimum CH selection and routing for both unicast and multicast transmissions [14]. The work suggests MPAR and EMPAR as solutions to both clustering and load balancing problems. The aggregated information from non-CH nodes is forwarded from the CH to the sink using the compressive sensing (CS) technique. Thus, energy consumption and network lifetime are improved using these algorithms. A few works in the research repository address WSN optimization using metaheuristic approaches such as glow worm swarm intelligence and the fruit fly algorithm. Their purpose is to emphasize energy-aware CH determination. Besides the CH search, constraints such as node energy, latency, and distance are considered to reach the design goal. Above all, quality of service (QoS) is the major performance metric taken into account to elevate overall network performance.

In another research approach, the best CH is selected using a hybrid of the artificial bee colony algorithm during the nectar exploration phase [15] and the monarch butterfly optimization algorithm in the exploitation phase, restraining the tradeoff between exploration and exploitation. The suggested HABC-MBOA algorithm swaps the employed-bee phase with the butterfly adjusting operator, preventing premature trapping. The approach eliminates potential CH overhead and sensor node mortality. A novel algorithm called Lines of Uniformity-based Enhanced Threshold (LUET), along with a rotation-based LUET, is proposed [16] to reduce the average number of isolated nodes in a WSN. This approach considers a node's remnant energy and its closeness to the lines of uniformity. Because the rate of depletion of nodal batteries fluctuates greatly with distance and energy discharge rate, LUET proves to be useful.

The genetic algorithm (GA)-based LEACH protocol [17] is introduced in WSNs for the optimal selection of CH nodes. The GA-based approach utilizes the optimal likelihood of a node to discover the CH with the least amount of energy usage for the first round's completion. LEACH-GA outperforms all other LEACH protocols. A quasioppositional butterfly optimization algorithm (QOBOA) CH selection protocol [18] is proposed to elect CH nodes. This protocol is capable of providing near-optimal solutions, and convergence is improved with the concept of oppositional learning. To counter the load balancing problem in WSNs, a virtual grid-like architecture [19] is proposed. Network routing is carried out by a fixed-parameter tractable approximation algorithm, popularly known as RFPT, which helps manage the energy involved in the transmission of data packets.

A PSO-based double CH selection [20] approach has been presented to eliminate the re-election cycle of the CH by electing two master-slave CHs that interact with each other in the transmission stage. The experimental results reveal that node death is delayed compared with existing methods. Another cluster-based information gathering system [21] is proposed to improve latency and also reduce packet loss during data transmission. A minimal spanning tree (MST) is utilized during the aggregation phase, and priority-based time slots are used during data transmission between the CH and nodes. This ensures a reduction in packet loss and improves throughput.

Further exploration of CH selection leads to the differential evolution and simulated annealing (DESA) algorithm [22], applied jointly to achieve a performance upgrade. The algorithm extends the network lifetime by reducing the number of perishing nodes associated with the CH. DESA has a fitness function that takes into account the remaining energy and node distance.

A novel CTEEDG protocol [23] is proposed to reduce the rate of dead node formation and to enhance the lifetime of wireless sensor nodes. This protocol applies fuzzy techniques to the broadcast information received via Hello messages to optimize CH selection. After selecting the CH node, a tree-based route search is performed to find the best route to reach the sink. This work offers high throughput and low energy consumption. A PSO technique is used in the CH selection process [24] as it lowers the cost of finding the best CH position. According to the objective function, PSO locates a CH by determining an ideal position and lowers the communication delay.

Several fuzzy logic designs have been implemented for the same CH selection and routing process so that they can be applied to large-scale networks, yielding improved reliability and scalability. A fuzzy model [25] using the shuffled frog leaping algorithm (SFLA) is developed to select prominent inputs. The proposed proper fuzzy-SFLA (PF-SFLA) helps in identifying application-specific input parameters. This approach enhances CH identification and hence the network lifetime. Energy-efficient approaches based on conventional and swarm intelligence methods have been implemented to preserve the energy of the WSN during transmission, but with energy-limited sensors it is expensive and tedious to achieve better network performance. Thus, a self-configuring approach is implemented in routing to enhance the transmission rate, optimize energy consumption, and select optimal routes.

3. LEACH Technique

The acronym LEACH denotes low-energy adaptive clustering hierarchy. It is currently one of the well-proven network routing and clustering protocols and operates with the time division multiple access (TDMA) protocol. The fundamental objective of this protocol is to conserve energy during the active operation of a sensor network. Generally, a WSN is a collection of sensor nodes that are interconnected in a particular manner, providing a strong impact on many monitoring applications in daily life. The disparities found in energy consumption during an ongoing transmission certainly drain the sensor battery.

LEACH uses a distributed algorithm to organize the sensor nodes into clusters. Every cluster identifies its own CH, which establishes a transmission link to the BS (sink node). The basic architecture of LEACH is shown in Figure 2. In LEACH, the CH nodes aggregate the data received from the clustered nodes, accumulate them, and forward them to the sink. LEACH is distinguished by the self-organizing capability of its clusters and by its adaptive nature. The LEACH protocol operates on a round concept: for each round, there will be a new CH to initialize data transmission. Each round of the LEACH protocol involves two main phases, namely, the setup and steady-state phases.

The setup phase contains the CH selection and cluster formation stages, whereas only data transmission takes place in the steady-state phase. In order to select a typical head node, all nodes in the network broadcast their individual probability at the beginning of every round. The distance between a CH located at (x_CH, y_CH) and the BS is calculated as

d = √((x_CH − x)² + (y_CH − y)²),

where x and y form the BS position in the search space.

The optimum number of clusters is then determined using standard cluster optimization techniques in terms of Tn, the total number of nodes, and L, the network length/width.
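As a concrete sketch, the widely quoted LEACH estimate of the optimal cluster count can be computed alongside the Euclidean CH-to-BS distance defined above; the amplifier energies ε_fs and ε_mp used below are assumed textbook values rather than parameters taken from this study.

```python
import math

def ch_to_bs_distance(ch_xy, bs_xy):
    """Euclidean distance from a cluster head to the base station."""
    (xc, yc), (x, y) = ch_xy, bs_xy
    return math.hypot(xc - x, yc - y)

def optimal_cluster_count(tn, length, d_bs, eps_fs=10e-12, eps_mp=0.0013e-12):
    """Widely used LEACH estimate of the optimal number of clusters.

    tn     : total number of nodes (Tn)
    length : side length of the network area (L)
    d_bs   : average node-to-BS distance
    eps_fs, eps_mp : free-space / multipath amplifier energies (assumed
    textbook values; this paper does not list them).
    """
    return math.sqrt(tn / (2 * math.pi)) * math.sqrt(eps_fs / eps_mp) * length / d_bs ** 2

# Example: 100 nodes in a 100 m x 100 m field, BS roughly 75 m away on average.
print(round(optimal_cluster_count(100, 100.0, 75.0)))  # about 6 clusters
```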

Initially, the CH is selected based on the available energy level. The CH then sends an advertising message to all its member nodes in CSMA mode. Upon receiving signal strength information from a CH node, the remaining nodes determine the CH that will lead them in the upcoming iteration. The CH node then schedules TDMA slots for data packet transmission and coordination within the cluster. According to the TDMA slots, sensor nodes report their data to the designated CH, which, in turn, aggregates all the nodes' information. In the uplink process, CH-to-BS communication is carried out by means of spread spectrum modulation.
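For reference, the classical LEACH election rule that drives this setup phase can be sketched as follows; the desired CH fraction P and the node bookkeeping are assumed example values, not parameters of the present work.

```python
import random

P_CH = 0.05  # desired fraction of cluster heads per round (assumed example value)

def leach_threshold(p, round_no):
    """Classical LEACH threshold T(n) for nodes that have not been CH
    within the current epoch of 1/p rounds."""
    return p / (1 - p * (round_no % int(1 / p)))

def elect_cluster_heads(node_ids, round_no, was_ch_recently):
    """Each eligible node becomes CH if its random draw falls below T(n)."""
    threshold = leach_threshold(P_CH, round_no)
    return [n for n in node_ids
            if not was_ch_recently[n] and random.random() < threshold]

# Example: 100 nodes, round 3, nobody has served as CH yet in this epoch.
nodes = list(range(100))
heads = elect_cluster_heads(nodes, 3, {n: False for n in nodes})
print(len(heads), "cluster heads elected")
```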

A unique spreading code is used by every cluster to communicate with the BS to avoid intercluster interference. After data transfer is completed in round one, the network re-enters the setup phase, initializing the CH for the next round, and follows its routine iterations. Figure 3 illustrates the LEACH process flow. Apart from its advantages such as energy efficiency and network lifetime, LEACH maximizes network coverage with minimum latency. The success rate of LEACH is determined by the following criteria:
(i) The distance between the CH and BS decides the number of hops and the energy dissipation
(ii) The transmit energy must be low for nearer nodes and high for farther nodes

4. Proposed FCEEC

In order to ensure full connectivity of nodes and successful packet delivery from CHs to the BS, the LEACH protocol is amended into fully connected energy-efficient clustering (FCEEC) with an added novel electrostatic discharge algorithm (ESDA) [26]. In all test cases, ESDA converges early to more than 60% accuracy towards the global optimum, whereas its competitors reach only around 20%. This shows that ESDA is 33 times faster than its counterparts at just 500 iterations. Beyond 2,000 iterations, the convergence exceeds 85% of the accurate global optimum solution. Thus, ESDA has a lower-order time complexity than its counterparts, as shown in the results of this manuscript.

Obviously, an additional computation is needed since a hybrid algorithm is used. As this computation is performed before data transfer begins, little time is lost. Because both the CH and the shortest routes are found quickly, the proposed FCEEC ensures that all necessary nodes are active so that the complete data are delivered to the BS even from the farthest node. The sequential flow of the ESD algorithm is as follows:

Step 1: Initialize with a random number of objects, denoted "Obj_Size," i.e., the total number of electrical equipment in the design space.
(i) The position of each node decides its fitness value; the higher the fitness value, the safer the equipment is from ESD.
(ii) In addition, each piece of equipment carries a counter to account for the maximum number of attacks. This is called the initialization stage.

Step 2: The search process is repeated "Max_Iter" times to find a solution to the identified optimization problem.
(i) Three objects (source, load, and victim) are randomly identified in every iteration, and the best is kept first.
(ii) A random number "n1" is generated. If n1 > 0.5, only two objects are involved; otherwise, all three take part.
(iii) In the case of two objects, the least-fit object moves towards the best-fit one (object 2 towards object 1), which is represented as

p2_update = p2 + α1 (p1 − p2),

in which p2_update is the updated position of object 2, p1 and p2 are the previous locations of the two objects, and α1 is a random number drawn from a normal distribution with mean 0.7 and SD 0.2.
(iv) In this case, as object 2 gets closer to object 1, an ESD affects object 2 (the victim). This is called direct ESD incidence.
(v) When n1 < 0.5, three elements participate in causing an ESD. Assuming that the third object moves towards the other two elements, then

p3_update = p3 + α2 (p1 − p3) + α3 (p2 − p3),

where α2 and α3 are random numbers drawn from a normal distribution with mean = 0.7 and SD = 0.2.
(vi) Object 3 is called the victim of the ESD if it gets closer to objects 1 and 2.
(vii) This is called indirect ESD. During each attack on a victim, its counter is incremented once.

Step 3: The boundary of the search space is checked, and out-of-bound elements are placed back inside.

Step 4: Each of the objects is then checked:
(i) If an object has suffered more than three ESD attacks, the object is fully damaged, and the search space is updated with a new random object;
(ii) else, if the ESD count of an object is ≤ 3, another random number "n2" is generated;
(iii) if n2 takes a value less than 0.2, a portion of the object is lost and is replaced;
(iv) otherwise, the object is safe from ESD.

Step 5: After combining the new objects with the previous ones, the next iteration is carried out.
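A compact Python sketch of the ESD search loop described in Steps 1-5 is given below. The move equations, the three-hit damage rule, and the n2 < 0.2 check follow the steps above, while the fitness function, bounds, and the partial-repair rule are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def esd_optimize(fitness, lb, ub, obj_size=30, max_iter=2000, seed=0):
    """Sketch of the ESD search loop in Steps 1-5 above (higher fitness = better)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    dim = lb.size

    # Step 1: random "equipment" positions plus an ESD-hit counter per object.
    pos = rng.uniform(lb, ub, size=(obj_size, dim))
    hits = np.zeros(obj_size, dtype=int)
    best_pos, best_fit = None, -np.inf

    for _ in range(max_iter):                      # Step 2: repeat Max_Iter times
        fit = np.array([fitness(p) for p in pos])
        if fit.max() > best_fit:
            best_fit, best_pos = fit.max(), pos[fit.argmax()].copy()

        # Pick three distinct objects (source, load, victim); keep the best first.
        idx = rng.choice(obj_size, size=3, replace=False)
        o1, o2, o3 = idx[np.argsort(-fit[idx])]

        if rng.random() > 0.5:                     # n1 > 0.5: direct ESD, two objects
            a1 = rng.normal(0.7, 0.2)
            pos[o2] = pos[o2] + a1 * (pos[o1] - pos[o2])
            hits[o2] += 1                          # object 2 is the victim
        else:                                      # n1 < 0.5: indirect ESD, three objects
            a2, a3 = rng.normal(0.7, 0.2, size=2)
            pos[o3] = pos[o3] + a2 * (pos[o1] - pos[o3]) + a3 * (pos[o2] - pos[o3])
            hits[o3] += 1                          # object 3 is the victim

        pos = np.clip(pos, lb, ub)                 # Step 3: put out-of-bound objects back

        for i in range(obj_size):                  # Step 4: damage check per object
            if hits[i] > 3:                        # fully damaged: replace with a new random object
                pos[i], hits[i] = rng.uniform(lb, ub), 0
            elif rng.random() < 0.2:               # n2 < 0.2: part of the object is lost and repaired
                pos[i] = 0.5 * (pos[i] + rng.uniform(lb, ub))  # illustrative partial-repair rule
        # Step 5: the updated objects form the population for the next iteration.

    return best_pos, best_fit

# Example: minimise the squared distance to the point (3, 4) in a 2-D search space.
sol, val = esd_optimize(lambda p: -np.sum((p - np.array([3.0, 4.0])) ** 2),
                        lb=[0, 0], ub=[10, 10], obj_size=20, max_iter=500)
print(sol, val)
```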

Basically, ESD is an electrical discharge phenomenon that commonly occurs across circuits due to sudden power surges or capacitive coupling effects. As shown in Figure 4, the ESD model is depicted with the help of three basic elements, namely, the source, load, and victim. Though the phenomenon itself is unwanted, its quick energy coupling is exploited here to model shortest-path-finding problems effectively.

As the coupling is due to the capacitive effect between two conductors, it is equivalent to connecting one node to its nearest neighbor so that packets are quickly transferred across the network. In this work, the proposed ESDA optimizes the network path selection for CHs situated far from the BS, since the distant CHs take multiple hops across neighboring CHs to carry packets to the BS.
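One way to realize this multihop relay, shown purely as an illustrative sketch rather than the exact routing rule of the proposed FCEEC, is a shortest-path search over the CH graph with squared distance as an energy-oriented edge cost.

```python
import heapq
import math

def shortest_route_to_bs(ch_positions, bs_pos, radio_range):
    """Dijkstra over the cluster-head graph rooted at the BS; edge cost is d^2
    as a proxy for transmit energy. Returns a predecessor map so each CH can
    follow prev[ch] -> ... -> "BS" to relay its packets.
    Illustrative sketch only; names and the cost rule are assumptions."""
    nodes = dict(ch_positions, BS=bs_pos)

    def cost(a, b):
        d = math.dist(nodes[a], nodes[b])
        return d ** 2 if d <= radio_range else math.inf  # out of range: no direct link

    dist = {n: math.inf for n in nodes}
    prev = {}
    dist["BS"] = 0.0
    heap = [(0.0, "BS")]
    while heap:
        d_u, u = heapq.heappop(heap)
        if d_u > dist[u]:
            continue
        for v in nodes:
            if v == u:
                continue
            nd = d_u + cost(u, v)
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return prev

# Example: the farthest CH relays through nearer CHs instead of reaching the BS directly.
chs = {"CH1": (20, 20), "CH2": (60, 60), "CH3": (95, 95)}
print(shortest_route_to_bs(chs, (0, 0), radio_range=60))
# {'CH1': 'BS', 'CH2': 'CH1', 'CH3': 'CH2'}
```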

ESDA has significantly lower computational complexity than other search algorithms, as reported in its base paper. It requires a considerably low number of iterations (rounds of execution) to find the best global solution. When combined with LEACH, there is a slight computational overhead (while finding the shortest route through nearby active nodes to the cluster head), which is insignificant compared with other combinations of LEACH and optimization algorithms, namely BO-LEACH and PSO-LEACH.

5. Results and Discussion

A typical WSN is constructed using the FCEEC algorithm to validate the energy efficiency and packet delivery between CHs and the BS. The simulation results are compared with the LEACH [27], LEACH-C [28], BO-LEACH [29], and ESD [30] algorithms. The design parameters considered are listed in Table 1.

Simulation results are described with respect to energy retention, dead node count, packet delivery, and network latency.

The variation in the total energy of all the nodes after every round is shown in Figure 5. It is observed that after 500 rounds of execution, the proposed FCEEC increases total energy savings by 81.25% compared with LEACH, 68.75% compared with C-LEACH, 46.87% compared with BO-LEACH, and 30% compared with ESD. The energy retention rises further to a maximum of 96% compared with LEACH, 92% compared with C-LEACH, 69.6% compared with BO-LEACH, and 48% compared with ESD when the number of rounds reaches 1,000. Energy retention after 500 and 1,000 rounds of iterations for the various algorithms is shown in Figure 6.

The dead node count after every round of iteration is shown in Figure 7. It is identified that after 500 rounds of execution, the proposed FCEEC reduces the dead node count by 55.5% compared with LEACH, 46.15% compared with C-LEACH, 33.3% compared with BO-LEACH, and 20% compared with ESD. After 1,000 rounds of execution, FCEEC reduces the dead node count by 25.8% compared with LEACH, 20.69% compared with C-LEACH, 16.86% compared with BO-LEACH, and 6.75% compared with ESD.

The number of dead nodes after 500 and 1,000 rounds of iterations for various algorithms is shown in Figure 8.

The data packet delivery from the CH to the BS after every round of execution is shown in Figure 9. After 500 rounds of execution, the proposed FCEEC raises the packet delivery by 33.52% compared with LEACH, 28.25% compared with C-LEACH, 23.65% compared with BO-LEACH, and 13.78% compared with ESD. After 1,000 rounds of execution, the proposed FCEEC improves the packet delivery by 32.28% compared with LEACH, 28.24% compared with C-LEACH, 24.71% compared with BO-LEACH, and 17.13% compared with ESD.

The number of data packets generated reduces as the number of dead nodes increases. In LEACH, more than 80 nodes are no longer alive after 650 rounds of iteration. Hence, the number of packets generated becomes much lower than in earlier rounds. A similar scenario occurs with the other algorithms. The packet delivery comparison after 500 and 1,000 rounds of iterations for the various algorithms is shown in Figure 10.

The network latency after every round of execution is shown in Figure 11. After 500 rounds of execution, the proposed FCEEC reduces the network latency by 13.71% compared with LEACH, 11.17% compared with C-LEACH, 7.07% compared with BO-LEACH, and 4.12% compared with ESD. After 1,000 rounds of execution, the proposed FCEEC improves (reduces) the network latency by 66.46% compared with LEACH, 61.07% compared with C-LEACH, 48.71% compared with BO-LEACH, and 34.92% compared with ESD.

The latency reduction comparison after 500 and 1,000 rounds of iterations for various algorithms is shown in Figure 12.

6. Conclusion

It is observed from the above results that the ESDA-based FCEEC algorithm facilitates optimum CH-BS placement and shortest-path discovery for full connectivity of nodes. The proposed method improves the packet delivery rate and, most importantly, significantly increases the energy efficiency of the nodes compared with the generic LEACH and other conventional methods. Hence, it is concluded that the newly introduced FCEEC optimizes the WSN output parameters, relative to LEACH after 1,000 rounds, in terms of an energy retention improvement of 96%, a reduction of dead nodes by 25.8%, an increase in the packet delivery rate by 32.28%, and a reduction in network latency by 66.46%.

Data Availability

The data used to support the findings of this study are available from the website https://www.mathworks.com, which was used for basic LEACH protocol simulation and study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.