Abstract

Prolonging the lifetime of the network is one of the principal research goals in wireless sensor networks (WSNs). A number of cutting-edge algorithms and protocols have been developed and tested, and many more are being created. Among these methods, clustering is particularly well known for its capacity to lengthen the lifespan of the network, and it is also one of the least expensive solutions. Two strategies are proposed here to extend the life of the WSN, improve its functionality, and maximize its usefulness. The Energy-Aware Ant Colony Optimization (EAACOP) protocol, which builds on a TDMA-based MAC protocol, was devised with the goal of reducing overall power consumption. The mission of EAACOP is to build energy-efficient groups with the highest possible number of nodes, choose the most qualified node to serve as each group's head, restrict the distance needed for intragroup communication, and extend the lifetime of the network as much as feasible. Using the pillar k-means grouping strategy, groups are generated and every node is covered to the greatest extent practically possible. The Ant Colony Optimization algorithm determines which nodes would make ideal group heads, and the TDMA-based MAC protocol is used to evaluate the amount of energy consumed during data transmission. The viability of the protocol is shown by simulating it and comparing it with contemporary protocols; the conclusions are based on QoS performance criteria. Specifically, compared with older methods, the packet delivery ratio, throughput, and residual energy are raised, while the packet loss ratio, network latency, edge-to-edge delay, and battery power consumption are lowered, resulting in an overall improvement.

1. Introduction

A wireless sensor network (WSN) is a network of battery-powered sensors that sense their application area, perform basic computations, and send the findings to a controlling body known as a base station. Because the network devices have limited computing capability and communication range, and because the sensor nodes (sensodes) are frequently left unattended after deployment, they must run on their own internal batteries; in most cases this is because the sensor network application operates in a hostile domain. The wireless sensor network is, in essence, an amalgamation of several technologies: embedded systems, networking, and MEMS (Micro-Electro-Mechanical Systems) technology [1]. Here, "embedded systems" refers to small, untethered processing, storage, and control that allows many devices; "networking" refers to self-organized and power-aware communication that coordinates higher-level tasks; and "MEMS" refers to miniaturized mechanical and electromechanical devices built by microfabrication that exploit the dense spatial and temporal coupling to the physical world that embedded systems make possible. Today, wireless sensor networks are no longer limited to detecting and transferring the data they gather to a base station; they are capable of performing a far wider range of functions. To realize this vision, wireless networks must be combined with sensing and actuation in order to establish an environment of ubiquitous computing that enables fine-grained monitoring and control of the surrounding environment [2].

The term "wireless sensor networks" is increasingly being replaced by "wireless sensor and actuator networks," indicating that, in addition to reporting available data to the base station, the network should also be capable of performing the required actions [3]. For example, the system ought to turn off a water heater that has been left on for more than the customary four hours per day when no one has noticed or no one is at home, since leaving it on any longer wastes energy. Some possible application domains of wireless sensor networks are the following:

(i) Healthcare: the applications of WSN in healthcare range from remote healthcare and surveillance to monitoring vital statistics, sudden fall detection, and rehabilitation supervision [4]. These applications are employed in assisted living or ambient assisted living environments for clients who are elderly or critically ill. In such applications, sensor devices may be fixed at certain locations within a residential area, or they can be worn by older people in the form of a wristwatch, band, or jacket in order to monitor the environment around them.

(ii) Agriculture: one application of wireless sensor networks in agriculture is precision agriculture, which may assist with forecasting diseases and monitoring livestock [5].

Recent applications have shown that reducing congestion in wireless sensor networks (WSN) is one of the most important challenges to be overcome [6]. Congestion may be caused by a number of factors, including packet collisions, data-rate fluctuations, buffer overflows, contention for the transmission channel, the many-to-one data-forwarding pattern, and changes in dynamic traffic over time [7]. Studies indicate that the time packets spend queued in congested links has a substantial effect on the overall quality of service that a network provides. The management and control of congestion in sensor networks have therefore emerged as an important research issue that has attracted the attention of both academics and industry experts, a need made more pressing by the recent increase in the number of connected devices. The goals of this work are listed below.

When utilizing a layer-based protocol, it is possible to exploit the advantages of asymmetric networks in selecting the optimal reverse route. As a consequence, both the delivery rate and the energy efficiency increase.

By estimating the link capacity and the packet service ratio in a busy network, it is feasible to detect, manage, and ultimately prevent congestion. This is achieved by picking a dynamic alternative route of nodes, based on the layer design, for different kinds of traffic.

Using a data-delivery-centric routing algorithm that ensures reliable data transmission, it is possible to minimise overhead while simultaneously prolonging the usable life of the sensor network.

2. Proposed Workflow

Data transmission through wireless sensor networks demands a high quality of service and, as a direct consequence, consumes a large amount of battery power. According to the research, the factor with the greatest influence on the overall lifetime of a wireless sensor network is the amount of battery power used by the sensor nodes. The Energy-Aware Sensor Node presented here reduces the amount of energy that a sensor node consumes [8]. To meet these requirements, the pillar k-means Ant Colony Optimization is combined with a TDMA-based MAC protocol (TDMA stands for time division multiple access). The proposed work is divided into three blocks [9]: clustering (grouping) using the pillar k-means clustering algorithm, selection of a group head using the Ant Colony Optimization algorithm, and energy-consumption evaluation using the TDMA-based MAC protocol.

Figure 1 describes the workflow of the proposed EAACOP protocol. This paper proposes the pillar k-means grouping algorithm to divide the sensor network into a number of groups of nodes; the Ant Colony Optimization algorithm to select the best-fitting group head for each group; and, finally, the TDMA-based MAC protocol to evaluate the power consumption of sensor nodes [10] while information is transmitted from one group node to another during the routing process.
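
As a minimal illustration of the TDMA idea in the third block, a sketch in Python that assigns each member node a dedicated transmission slot within its group's frame is shown below (the frame layout and node identifiers are assumptions for illustration, not the paper's implementation):

def tdma_schedule(groups):
    # Assign each member node of every group a dedicated transmission slot.
    # Members of a group are given consecutive slots within that group's
    # frame (a simplifying assumption for illustration).
    schedule = {}
    for frame, members in enumerate(groups):
        for slot, node_id in enumerate(members):
            schedule[node_id] = (frame, slot)
    return schedule

# Example with two hypothetical groups of node identifiers.
groups = [["n1", "n2", "n3"], ["n4", "n5"]]
print(tdma_schedule(groups))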

The k-means algorithm, introduced by MacQueen in 1967, is one of the best-known, most widely used, and most time-efficient techniques for grouping data. It is an unsupervised algorithm that may be applied to a variety of grouping problems, and it is well suited to dividing a network into a number of groups. When partitioning a network into groups, the distance between each sensor node and the head of its group is the relevant parameter. The primary goal of this approach is to minimize the sum of the distances between each sensor node in a group and the group's centroid node. The flow graph of the k-means algorithm is shown in Figure 2. The algorithm proceeds as follows.

Step 1. To organize n nodes into k groups, pick k arbitrary group centroids at arbitrary places at the start of the process.

Step 2. Calculate the Euclidean distance between each sensor node and each of the group centroids. After the distance estimation, k groups are formed by assigning each sensor node to the group centroid nearest to it.

Step 3. Recalculate the location of the centroid of each group using the assignments from Step 2.

Step 4. If the location of any group centroid has changed, repeat from Step 2 until the centroids no longer move. Otherwise, the final groups have been formed.
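
As an illustration, the grouping loop above can be sketched in Python as follows (a minimal sketch assuming the sensor nodes are 2-D coordinates; the field size, node count, and value of k in the example are hypothetical, not values taken from the paper):

import random
import math

def kmeans(nodes, k, max_iter=100):
    # Step 1: pick k arbitrary initial centroids.
    centroids = random.sample(nodes, k)
    for _ in range(max_iter):
        # Step 2: assign each node to its nearest centroid (Euclidean distance).
        groups = [[] for _ in range(k)]
        for point in nodes:
            j = min(range(k), key=lambda i: math.dist(point, centroids[i]))
            groups[j].append(point)
        # Step 3: recompute each centroid as the mean of its group.
        new_centroids = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
        # Step 4: stop when no centroid moves; otherwise repeat Step 2.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return groups, centroids

# Example: 100 sensor nodes scattered over a 100 m x 100 m field, 5 groups.
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(100)]
groups, centroids = kmeans(nodes, k=5)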

The initial centroid nodes of the k-means clustering algorithm are generated at random. The k-means method may therefore fail to ensure that the centroids are dispersed across the search space: the initial centroid nodes may be placed relatively close together. The initial, randomly generated centroids may happen to lie quite near the final group centroids, but this will not always be the case. If the starting centroids are significantly different from the final group centroids, the grouping may be wrong. A variety of k-means optimization strategies have been proposed in order to tackle this initialization issue of the k-means grouping algorithm [11].

To optimize the location of the initial centroids for the k-means clustering method, [12] developed the pillar algorithm, which is described in detail below. The pillar k-means grouping algorithm builds on the basic k-means grouping algorithm, which provides the fundamental operation. The pillar concept is derived from the pillars installed in a concrete building, which are designed to sustain the pressure of the roof by being spaced at certain distances from one another [13]. Figure 3 depicts the positioning of two, three, and four pillars with the goal of alleviating the pressure distribution of various roof designs at different locations.

2.1. Determining Initial Centroids

First, in order to determine the initial centroids, the grand mean of the whole set of data points is computed and used as the gravitational center of the data distribution [14].

Using a distance measure (in this step, call it D1), each data point is measured against the grand mean. The data point with the highest distance measure in this step (i.e., the data point farthest from the grand mean of all data points) is chosen as the first candidate for the initial centroid, c1. If c1 does not appear to be an outlier, it is elevated to the first initial centroid, CC1.

After picking the first candidate, the distance measure (let it be D2 in this step) between each data point and the newly chosen first candidate of the initial centroid is recalculated. From this point on, construction of the cumulative distance measure (DM) is initiated: the distance measure computed in this step is assigned to DM. The same process is used to choose a candidate for the second initial centroid; however, this time DM is used instead of D2 [15]. The data point with the greatest value of DM is chosen as the second candidate, c2. If c2 is not regarded as an outlier in the distribution, it is elevated to the second initial centroid, CC2.

The remaining initial centroid candidates are picked by recalculating Dt (where t is the current iteration step) between each data point and the previously selected initial centroid CCt-1 [16]. Dt is then added to the cumulative distance measure DM (DM = DM + Dt), giving the final distance measure. This cumulating strategy makes it possible to avoid selecting the data points closest to CCt-1 as candidates for the next initial centroid. As a result, the subsequent initial centroids tend to be dispersed far away from the prior ones. The data point with the greatest distance measure DM is chosen as the next candidate, ct. If ct does not appear to be an outlier, it is used as the next initial centroid, CCt. The iterative procedure continues until all initial centroids have been assigned. As a result, all centroids are positioned as far apart from one another as feasible within the data distribution [17].

2.2. Outlier Detection Mechanism

A candidate data point nominated as an initial centroid must not be labeled as an outlier [18]. An outlier can be identified by counting the number of neighbor points located inside the neighborhood boundary.

The minimum number of neighbors is set by applying a probabilistic parameter α to the average cluster size: n_min = α · (n / k), where n is the number of data points and k is the number of clusters.

The neighborhood boundary is set by applying a threshold β to the highest distance measure in DM.

To obtain the number of data points that are neighbors inside the boundary, a distance measure must be computed between all data points and the candidate; this distance measure may, however, be reused in the next iteration step. The candidate is categorized as an outlier if the number of neighbors inside the boundary is less than the minimum number n_min. If the candidate is confirmed to be an outlier, a new candidate is produced based on the second-largest distance from the center. Outlier identification continues until the first centroid of the distribution, CC1, is found.
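
A compact sketch of this initialization, including the outlier check, is given below (the parameters alpha and beta and their values are assumptions chosen for illustration; the paper does not report specific values):

import math

def pillar_initial_centroids(nodes, k, alpha=0.7, beta=0.3):
    # nodes: list of (x, y) positions; alpha and beta are user-chosen parameters.
    n = len(nodes)
    n_min = alpha * (n / k)                       # minimum number of neighbours
    gm = (sum(p[0] for p in nodes) / n,
          sum(p[1] for p in nodes) / n)           # grand mean (gravitational centre)
    dm = [math.dist(p, gm) for p in nodes]        # cumulative distance measure DM
    boundary = beta * max(dm)                     # neighbourhood boundary
    centroids, examined = [], set()
    while len(centroids) < k and len(examined) < n:
        # Candidate: node with the largest cumulative distance not yet examined.
        cand = max((i for i in range(n) if i not in examined), key=lambda i: dm[i])
        examined.add(cand)
        neighbours = sum(1 for p in nodes if math.dist(p, nodes[cand]) <= boundary)
        if neighbours >= n_min:                   # not an outlier: promote it
            centroids.append(nodes[cand])
            # Accumulate distances to the new centroid so the next candidate
            # is pushed away from all previously chosen centroids.
            dm = [dm[i] + math.dist(nodes[i], nodes[cand]) for i in range(n)]
        # Otherwise the candidate is treated as an outlier and skipped.
    return centroids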

2.3. Pillar k-Means Grouping Algorithm

In this work, to optimize the k-means grouping, the pillar algorithm is suggested for identifying the initial centroids. In this method, the location of the initial centroids is determined by estimating the accumulated distance between each node of the group and all of the previously chosen centroids [19]. The member node with the farthest distance is taken as the centroid node [20].

The suggested pillar k-means is a grouping technique for partitioning the network of nodes into groups of nodes. The concept of the pillar algorithm is added to k-means grouping for optimization [21]. The pillar k-means groups the WSN into groups of nodes placed at particular distances, using the Euclidean distance to evaluate the centroid nodes of each group of nodes.

Assume N = {N1, N2, ..., Nn} represents the set of nodes taken as input, where n is the total number of input nodes; CC = {CC1, CC2, ..., CCk} represents the set of initial centroids taken initially from the n nodes, where k is the number of groups; DM represents the cumulative distance measure; D represents the distance estimated in each iteration; and m represents the pooled (grand) mean of N.

The suggested pillar k-means grouping method creates the initial centroid nodes from the cumulative distance between every node of the group and all of the group centroids chosen so far. The node with the largest distance is picked as the centroid node in this assessment; that is, the sensor with the largest distance metric is picked as the early (initial) centroid CC.

In wireless sensor networks, to find the initial centroids among the sensor nodes, the grand mean of all sensor nodes is computed as the gravitational center of the sensor nodes [22].

Step 1: the distance measure (let it be D1 in this phase) is computed between the grand mean and each sensor node in the network. The sensor node with the largest distance measure in this step (the sensor node furthest from the grand mean of all sensor nodes) is picked as the first contender for the initial centroid, c1. If c1 is not an outlier, it is elevated to the first initial centroid CC1.

Step 2: after picking the first candidate of the initial centroid, the distance measure (let it be D2 in this step) between the newly chosen first candidate of the original centroid and each sensor node is recalculated. Construction of the cumulative distance measure DM begins from this stage: the distance measure determined in this step is assigned to DM. To choose a candidate for the second initial centroid, the identical procedure is followed using DM instead of D2. The sensor node with the greatest value of DM is picked as the second initial centroid candidate, c2. If c2 is not categorized as an outlier, it is elevated to the second initial centroid CC2. The value of the pheromone update parameter is determined by equation (7).

Analysis of equation (7) shows that the ACO algorithm is readily stuck in local optima. This analysis suggests which adjustments should be made, or more specifically which elements should not be present in the shortest route. To overcome this issue, a new heuristic function is first developed that updates the highest-priority rules in the system.

The remaining initial centroid candidates are chosen by recalculating the distance Dt (where t represents the current iteration step) between each sensor node and the previously picked initial centroid CCt-1. Dt is then added to the cumulative distance measure DM (DM = DM + Dt) to arrive at the final distance measure. Using this cumulating strategy, it is feasible to avoid picking the sensor nodes closest to CCt-1 as candidates for the next initial centroid, which would otherwise be the case. As a consequence, the subsequent initial centroids are spread away from the previous ones. In the next stage, the sensor node with the largest distance measure DM is selected as the candidate ct. It is assigned as the next initial centroid CCt if ct does not seem to be an outlier. The iterative approach continues until all initial centroids have been allocated. Using this method, all centroids are located as far apart from one another as possible [23].

The center of each group can be found by allocating each input node to the group with the smallest distance to it; the most recent group centroid is then generated as the mean of the nodes assigned to that group.

The credentials of an input node are represented by its distance measure, and the outlier is represented by DPm; these are used to compare the distance to the input node with the distance to DPm, and the most recent group is defined accordingly.

The grouping step is repeated until the change in the group centroids falls below a threshold value in the updating step.

The Ant Colony Optimization technique is a metaheuristic optimization algorithm inspired by nature. Modeled on the way an ant colony searches for food, the algorithm was created to help discover good solutions. The approach computationally mimics the communication between ants and their colony members that happens spontaneously in the field [24].

The ant colony is modeled on an unusual insect found in the wild that is sometimes mistaken for a bee. At the juvenile stage of its life cycle it is popularly referred to as a doodlebug. It digs funnel-shaped pits in the surface of the earth, enters the pit from its lowest point, and waits for its prey (beetles) in a hidden spot at the bottom. As it moves through the pit, it tosses dirt toward the pit's outer walls to trap the bugs caught inside. Eventually, it locates its prey at the bottom of the pit and then prepares the site for the next prey-hunting excursion [25]. Such complex hunting operations have long been observed in these colonies [22, 23].

Figure 4 describes the procedure, which may be utilized to optimize problems in a range of areas where obtaining the best possible outcome in the shortest amount of time is essential [26].

In order to keep track of their current position in the search space, the ants employ a random-walk method at each optimization step. Random ant walks occur within a search region with a limited perimeter (a search space with restricted boundaries), and each position is updated in line with the current iteration number. While the optimization procedure is underway, two random walks, occurring around the two selected ant colonies, are used to update the position of the target [27]. Because of the arbitrary selection of the ant colony and the unplanned movement of the ants in its immediate neighborhood, the inspection region may not be fully explored; as a result, when the ants walk randomly, the fitness function assigns them a greater value than they would otherwise receive. The random walk is evaluated separately for each ant in each dimension, depending on the population [28]. To determine which ant colony position is optimal for the fitness evaluation, the position that performs best is selected as the optimum position to be utilized in the next iteration. The best position is compared with the current position, and it is assessed whether the best position needs to be replaced [29]. As a consequence, the search area around that location is retained and may be used in future iterations. The procedure was designed to recreate the pursuing collaboration that happens between the ant colony and the ants [30]. To establish this kind of interaction, the ants must be allowed to wander freely throughout the examination area while the ant colonies chase after them using traps. Foraging ants move randomly around the search area in order to find food for their colony [31]; this unplanned movement of each ant within the examination area is governed by a random-walk model so that it does not become too erratic.

3. Selection of Cluster Head Using ACO Algorithm

After the sensor nodes have been deployed in a WSN, the pillar k-means method is used to divide the sensor network into a predetermined number of groups [32]. The Ant Colony Optimization method is then used to pick the best head for each group from among its members. In this algorithm, each sensor node is referred to as an ant or an ant colony, depending on its role. Fitness (objective) function (FF) values are computed for every sensor node in each iteration of the algorithm [20].

In each iteration, the sensor with the highest fitness function value is nominated as the ideal head of the group, and this process is repeated [33].

The following parameters are utilized to develop the fitness function, which is then used to determine the fitness value for each sensor node [34].

These parameters are the node's residual energy, the total number of neighboring sensors within the range of the node, the aggregate distance between the sensor and its adjacent nodes, and the Euclidean distance between the sensor and the sink (base station) [35].

The residual energy of the node is considered so that a node with greater remaining energy is chosen as the group's leader and a node with less remaining energy is avoided. The aggregate distance between a sensor and its surrounding nodes is considered so that the selected head of the group is close to a large number of the member nodes; this allows the member nodes to save their battery power, and this form of power preservation extends the lifetime of the network. Furthermore, it reduces the number of nodes that end up operating individually. The Euclidean distance between a sensor node and the central station is considered so that the selected group leader is positioned favourably with respect to the central station in order to preserve battery power [36].
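
A hedged sketch of such a fitness function is given below (the weights w1 to w4, their values, and the exact way the four parameters are combined are illustrative assumptions; the paper lists the parameters but does not spell out the weighting):

import math

def fitness(node, members, sink, radio_range,
            w1=0.4, w2=0.2, w3=0.2, w4=0.2):
    # node: dict with 'pos' = (x, y) and 'energy' = residual energy in joules.
    # members: positions of the other nodes in the same group.
    neighbours = [p for p in members
                  if math.dist(node['pos'], p) <= radio_range]
    total_dist = sum(math.dist(node['pos'], p) for p in neighbours) or 1.0
    dist_to_sink = math.dist(node['pos'], sink) or 1.0
    # Favour high residual energy and many nearby neighbours; penalise a large
    # aggregate member distance and (as an assumption) a large distance to the sink.
    return (w1 * node['energy'] + w2 * len(neighbours)
            + w3 / total_dist + w4 / dist_to_sink)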

Every member node is allowed to move about the randomly selected group head in a random pattern. After each member node's random movement, a min–max normalisation is applied to its location in order to constrain its placement within the search space and find the optimal solution. The value of the fitness function is then estimated for each member node using its updated location. The newly estimated fitness value of each member node is compared with that of the randomly chosen group head [37]; if the fitness value of any member node is higher than that of the arbitrarily chosen group head, the objective function values of the two nodes are swapped. The fitness value of the random group head is then compared with that of the optimal group head; if the random group head scores higher, it is selected as the new optimal group head. The objective function rate of each group head describes how effective the group built around that head is. The member node with the highest objective function rate is chosen to serve as the leader of the group with the largest number of members, which makes it more likely that members will join that leader's group. The member node with the next-highest objective function rate is chosen to serve as the leader of the next group, and this process is repeated for each consecutive group until every member node has been assigned to a group [38]. In the final repetition, the elite group head, the one with the greatest objective function rate, is selected as head of the group. A new group is then formed by recruiting members from the neighborhood in which the elite group head resides. If the objective function rate of one of the members is higher than that of the arbitrary group head, their objective function rates are swapped; the arbitrarily chosen group head is then compared with the elite group head, and if it scores higher it becomes the group's new elite head. The objective function rate of every member node is recomputed at each iteration.

The fitness function rate of each member node of the network is then determined using the node's most recent location as input. The freshly calculated fitness value of each member node is compared with the value obtained for the randomly selected group head; if any member node's fitness value is higher, the objective function values are swapped [39]. The objective function value of the randomly selected group head is then compared with that of the ideal group head, and if it is higher, the randomly chosen group head is selected as the new optimal group head.
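
The selection loop described above might be sketched as follows (a simplified interpretation: one randomly chosen head per group, random member walks clamped to the search space as a min-max-style normalisation, and a role swap whenever a member outperforms the current head; the step size, iteration count, and the one-argument fitness_fn wrapper are assumptions):

import random

def select_group_head(group, fitness_fn, bounds, iterations=50, step=2.0):
    # group: list of node dicts, each with a mutable 'pos' and an 'energy' value.
    # fitness_fn: one-argument wrapper around a fitness function such as the
    #             sketch above (closing over member positions, sink, radio range).
    (x_min, x_max), (y_min, y_max) = bounds
    head = random.choice(group)          # arbitrarily chosen group head
    best = head                          # current optimal group head
    for _ in range(iterations):
        for node in group:
            # Random walk of the member node, clamped to the search space
            # (a min-max-style normalisation of the candidate position).
            x = node['pos'][0] + random.uniform(-step, step)
            y = node['pos'][1] + random.uniform(-step, step)
            node['pos'] = (min(max(x, x_min), x_max),
                           min(max(y, y_min), y_max))
            # A member that beats the random head takes over its role.
            if fitness_fn(node) > fitness_fn(head):
                head = node
        # If the random head now beats the optimal head, promote it.
        if fitness_fn(head) > fitness_fn(best):
            best = head
    return best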

During the search, it is assumed that n sensor nodes and k group heads may be located anywhere inside the search area [40]. As described above, the position of every node is saved in the matrix PNodes, while the position of every group head is recorded in the matrix PCH.

This result gives the minimum value of the random walk during the current iteration.

4. Results and Discussion

The proposed EAACOP protocol is emulated using NS-2 (network simulator). In this network scenario, a total of 100 sensor nodes are distributed throughout the simulation region. The proposed EAACOP protocol is compared to two existing protocols, namely, EAODV (extended ad hoc on-demand distance vector) and VGDRA (virtual grid-based dynamic route adjustment). The parameters used in the simulation are reported in Table 1.

Based on the following QoS performance metrics, the simulation results are evaluated:
(1) Packet delivery ratio (PDR)
(2) Packet loss ratio (PLR)
(3) Network latency (NL)
(4) Edge-to-edge delay (EED)
(5) Network throughput (NT)
(6) Energy consumption (EC)
(7) Energy remaining (ER)

4.1. Packet Delivery Ratio (PDR)

The PDR is defined as the ratio of the total number of messages accepted by the receiver to the total number of messages forwarded by the sender.
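
Written as an equation (a standard formulation; expressing the ratio as a percentage is an assumption):

\[
\mathrm{PDR} = \frac{\sum \text{packets received by the destination}}{\sum \text{packets sent by the source}} \times 100\%
\]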

Figure 5 gives the packet delivery ratio of the existing and proposed work.

Table 2 shows the results of the PDR analysis for the proposed protocol (EA-PATM) and for the current protocols (VGDRA and EAODV). The examination of the comparative results shows that, as the number of nodes in the network grows, the PDR of the recommended protocol increases in comparison with the previously developed protocols.

4.2. Packet Loss Ratio (PLR)

The PLR captures the packets lost during transmission and reception from source to destination. It is defined as the ratio of the number of forwarded messages that are lost to the total number of messages communicated.
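
In equation form (using the same convention as for the PDR, with the percentage scaling assumed):

\[
\mathrm{PLR} = \frac{\sum \text{packets sent} - \sum \text{packets received}}{\sum \text{packets sent}} \times 100\%
\]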

Figure 6 depicts a graph of the packet loss ratio for the proposed EAACOP protocol in comparison with popular protocols such as VGDRA and EAODV. Table 3 compares the packet loss ratio results of the proposed protocol (EA-PATM) with those of the current protocols (VGDRA and EAODV). During the evaluation, it was clearly observed that the suggested protocol results in a lower packet loss ratio than the current approaches for all numbers of nodes.

4.3. Network Latency (NL)

The network latency is defined as the total time needed to communicate a data packet from an origin node to an end node across the network; the NL is similar to a round-trip time.

Network latency for the proposed EAACOP protocol is plotted against that of the known protocols VGDRA and EAODV in Figure 7, which compares the latency of the protocols. The network latency results for varying numbers of sensor nodes are shown in Table 4. The analysis demonstrates that the suggested EAACOP protocol has the lowest latency, 53 seconds for a total of 100 nodes, whereas the existing techniques VGDRA and EAODV take 55 seconds and 65 seconds, respectively. We found that, when compared to the other protocols, the EAACOP protocol had the lowest latency.

4.4. Edge-to-Edge Delay (EED)

The EED is defined as the total time necessary to transport a packet from a starting node to a finishing node through the network's topology of nodes. Figure 8 shows the edge-to-edge delay comparison of the already available protocols and the newly designed protocol. Table 5 shows the edge-to-edge delay measurements for the proposed protocol and the current protocols in a simulated environment with varied groups of nodes. Compared to the current protocols, the suggested approach has a lower EED value.

4.5. Network Throughput (NT)

The throughput of a network is defined as the total number of messages successfully conveyed through the network per unit time interval. The NT measures the messages delivered within that interval.
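
As an equation (the packets-per-second unit follows the results reported below):

\[
\mathrm{NT} = \frac{\text{total packets successfully delivered}}{\text{total simulation time}} \quad \text{(packets/second)}
\]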

With various categories of nodes in the simulated environment, the network throughput of the current protocols (VGDRA, EAODV) is compared with that of the suggested protocol (EA-PATM). The result of the comparison is shown in Figure 9, and Table 6 contains an analysis of the results. The throughput of the network increases as the number of nodes grows. For one hundred nodes, the findings demonstrate that the newly designed EAACOP protocol has a greater throughput of 60 packets/second, while the older protocols VGDRA and EAODV have lower throughputs of 44 and 38 packets/second, respectively.

4.6. Energy Consumption (EC)

In a wireless sensor network, the battery power (energy) consumed by a node is the most important factor influencing the network's efficiency and data transmission; a lack of efficiency in data transfer causes node interference during transmission. Let E be the amount of energy used by the system, d the distance between the nodes of the group, E_amp the power consumed by the transmission amplifier to communicate the data with the forwarding node, and E_elec the energy loss experienced during the transmission of l-bit data via the radio channel; the probability model employs radio-signal transmission for data transfer. Following the commonly used first-order radio model, the energy expended by the radio for the transmission of a given message is computed as

E_Tx(l, d) = l · E_elec + l · E_amp · d².

Forwarding and receiving a message are not free under these parameters either; receiving an l-bit message costs E_Rx(l) = l · E_elec. Thus, the total energy consumption for one hop can be defined as

E(l, d) = E_Tx(l, d) + E_Rx(l) = 2 · l · E_elec + l · E_amp · d².
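
A small sketch of this radio energy model in Python (the constant values are common illustrative choices, not parameters reported in the paper):

E_ELEC = 50e-9   # electronics energy per bit (J/bit); illustrative value
E_AMP = 100e-12  # amplifier energy per bit per m^2 (J/bit/m^2); illustrative value

def tx_energy(bits, distance):
    # E_Tx(l, d) = l * E_elec + l * E_amp * d^2
    return bits * E_ELEC + bits * E_AMP * distance ** 2

def rx_energy(bits):
    # E_Rx(l) = l * E_elec
    return bits * E_ELEC

def hop_energy(bits, distance):
    # Total energy for one hop: transmit plus receive.
    return tx_energy(bits, distance) + rx_energy(bits)

# Example: a 4000-bit packet sent over a 50 m hop.
print(hop_energy(4000, 50.0))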

Figure 10 shows the analysis and comparison of the energy usage throughout the routing process from 20 nodes to 100 nodes, with the results compared against the current methodologies as in the previous sections. Table 7 contains the findings for each protocol applied to a variety of node configurations, with the most recent findings shown at the beginning of the table. Examination of the new protocol against the existing protocols revealed that the new protocol decreases the amount of energy used. According to the data, the recommended EAACOP protocol consumes the least energy, 58 J/s, when the simulation includes 100 nodes, whereas the current protocols VGDRA and EAODV use considerably more, 88 J/s and 70 J/s, respectively, for 100 nodes in the simulation.

4.7. Energy Remaining (ER)

Sensor nodes are a significant source of energy consumption in wireless sensor networks, since they account for the vast majority of the energy used for data transmission, data processing, and application usage in the network. After a network application is processed, the energy left in the nodes is monitored and examined in order to increase the network efficiency and the capabilities of the nodes, both of which are crucial components of overall network performance.

In this study, the total remaining energy of the sensor nodes is computed and compared for the suggested protocol (EA-PATM) and the existing methods (VGDRA and EAODV). The comparison graph for different groups of nodes is displayed in Figure 11. The findings demonstrate that the proposed EAACOP retains a larger residual energy than the current protocols. Table 8 displays the outcomes of the comparisons.

5. Conclusion and Discussion

With the proposed EAACOP protocol, the wireless sensor network is partitioned into a number of groups using the pillar k-means grouping strategy discussed above. Because of this grouping, the sensor nodes and their group head are situated closer to one another, which lowers the intragroup communication cost. After the groups are formed, the Ant Colony Optimization (ACO) algorithm is used to pick, within each group, the node best suited to become the group head. This strategy significantly decreases both the difficulty and the expense of handling individual nodes and of communication between different groups, and it also contributes to improving the overall quality of service. Each node on the network uses a TDMA-based MAC protocol (time division multiple access-based medium access control) in order to cut down on collisions and preserve battery power (energy). The proposed EAACOP protocol is simulated and executed in NS-2, and the results are compared with those of established protocols such as VGDRA and EAODV. In particular, it is contrasted with the existing protocols in terms of packet delivery ratio (PDR), packet loss ratio (PLR), network latency (NL), edge-to-edge delay (EED), network throughput (NT), energy consumption (EC), and energy remaining (ER). The comparison shows that, for nodes belonging to a variety of categories, the proposed EAACOP protocol has a higher packet delivery ratio, a better throughput, and a larger amount of residual energy than the existing methodologies. Similarly, for a variety of node counts, the proposed protocol achieves lower packet loss ratios, network latencies, edge-to-edge delays, and energy usage than the other methodologies, because it consumes less energy. The findings are presented graphically in the figures above. As shown by the experimental investigation, the recommended EAACOP protocol surpasses the existing VGDRA and EAODV algorithms by a significant margin. By using the EAACOP protocol, it is possible to provide a trustworthy method for reducing energy consumption in networks while simultaneously improving the quality of service in WSNs.

Data Availability

The data that support the findings of this study are available on request from the corresponding author.

Conflicts of Interest

All authors declared that they do not have any conflict of interest.