Abstract
Congestion in wireless sensor networks (WSNs) is an unavoidable issue in today's scenario, where data traffic has increased to the aggregate capacity of the channel. The consequence is overflow of the buffer at each receiving sensor node, which ultimately drops packets, reduces the packet delivery ratio, and degrades the throughput of the network, since retransmission of every unacknowledged packet is not an energy-optimal solution for resource-restricted sensor nodes. Routing is one of the most preferred approaches for minimizing the energy consumption of nodes and enhancing throughput in WSNs; the routing problem has been proved to be NP-hard, and it has been realized that heuristic-based approaches provide better performance than their traditional counterparts. To tackle all the mentioned issues, this paper proposes an efficient congestion avoidance approach using the Huffman coding algorithm and ant colony optimization (ECAHA) to improve network performance. This approach is a combination of traffic-oriented and resource-oriented optimization. Specifically, ant colony optimization has been employed to find multiple congestion-free alternate paths. The forward ant constructs multiple congestion-free paths from the source to the sink node, and the backward ant confirms the successful creation of paths while moving from the sink to the source node, considering the energy of the link, packet loss rate, and congestion level. Huffman coding considers the packet loss rate on the different alternate paths discovered by ant colony optimization to select an optimal path. Finally, the simulation results show that the proposed approach outperforms state-of-the-art approaches in terms of average energy consumption, delay, throughput, and packet delivery ratio.
1. Introduction
The advancement of low-cost, small, and tiny sensor nodes plays a significant role, as these nodes have the very attractive characteristics of sensing environmental conditions and processing the received signals. Sensor nodes can be deployed in both accessible and inaccessible areas to sense data across various applications such as battlefield surveillance, building inspection, target field imaging, greenhouse monitoring, and disaster area monitoring [1]. The deployment of sensor nodes is application-dependent, so it can be random or deterministic [2]. The sensor nodes of a wireless sensor network (WSN) sense data or events, gather the data under a defined infrastructure, and process the received signals. Despite much advancement, wireless sensor networks still suffer from limitations such as limited memory, inadequate computation, limited bandwidth, and battery-powered nodes [3, 4]. Due to the short communication range of sensor nodes, intermediate nodes collaborate in forwarding the data packets. There are various applications of WSNs where sensor nodes are deployed in an infrastructure-less network. The sensor nodes sense an event and report to the nearest base station for the respective action. To obtain quality of service, such as congestion-free end-to-end data delivery and delay-free data transmission, the network must be designed effectively and efficiently to tackle the congestion and energy issues of WSNs [5, 6]. When a node receives more data than it has the capacity to process, congestion may occur, which leads to the retransmission of unacknowledged packets. This frequent retransmission of packets may deplete the energy of the nodes. The nodes in a WSN are battery-powered, so energy-efficient decisions are the desired solution to maximize the lifetime of the network [7]. Load balancing, duty cycling, and data aggregation are among the traditional approaches that have been studied and implemented on WSNs.
But due to the rapid increase in the number of sensor nodes, these traditional approaches face many issues. Nowadays, trajectory-based data forwarding, mobile sinks, and energy-aware node selection approaches are used to conserve residual energy [8, 9].
A WSN is bounded by limited transmission and processing capabilities; therefore, conserving the energy of sensor nodes and achieving successful data transmission are two important and mandatory requirements. There are four popular classes of WSN applications: event-oriented, query-oriented, continuous, and hybrid. Event-oriented applications respond to a few predefined situations, resulting in an unforeseeable traffic rate. Applications where one sensor node transmits a query and another sensor node has to respond to it are known as query-oriented applications. On the other hand, continuous applications perform periodically or in a few time slots. Lastly, hybrid applications are a fusion of the above three [6]. The transport layer deals with the congestion problem and is accountable for the end-to-end network connection. The objective of the transport layer is to provide fair bandwidth allocation, control the data flow rate over reliable connection links, and retransmit lost packets in an energy-efficient fashion. A few existing strategies for controlling congestion are slow start, congestion avoidance, fast retransmit, and fast recovery, but these are unable to perform well in today's network scenario [10]. Therefore, an effective and efficient congestion control (CC) mechanism under constraints such as resources, performance, and scalability is in high demand. Congestion is noticed at two locations, namely, the node or the link. When a node's buffer is fully occupied owing to the fast arrival rate of incoming packets, node-level congestion happens. When data packets collide on network/radio links owing to the fast transmission of data packets, link-level congestion is noticed. Performance is influenced by both types of congestion. Congestion should be detected at an early stage, then notified to the nodes near the source node, and eventually controlled [11]. Traffic control and resource control are the two most popular approaches in this domain.
When the data flow rate is controlled, a traffic control approach is being used, while the resource control approach looks after the fair allocation of resources to divert data onto a less congested path [12]. Both approaches consider the node's priority, which plays the foremost role in dealing with congestion. To address the above-mentioned challenges, we present an energy-efficient, robust, and heuristic-based approach to boost the lifetime of the network while keeping it congestion-free. The major research contributions are as follows:
(1) An efficient congestion avoidance approach using Huffman coding enabled ant colony optimization (ECAHA) in wireless sensor networks is proposed, considering important WSN constraints such as energy, packet loss rate, and congestion level.
(2) Ant colony optimization is applied to search for congestion-free alternate paths whenever there is a rise in congestion on the current path. The forward ant constructs multiple congestion-free paths from the source to the sink node, and the backward ant confirms the successful creation of paths while moving from the sink to the source node, considering the energy of the link, packet loss rate, and congestion level.
(3) The packet loss probability is computed for each path using the maximum entropy principle. Packet entropy helps in evaluating the uncertainty of congestion degrees on alternate paths considering the packet loss rate.
(4) Huffman coding is used to select an optimal path to direct the load. The optimal path is the best among the multiple paths identified by ACO in terms of energy, congestion level, and packet loss rate; Huffman coding considers the packet loss probability on the different alternate paths discovered by ant colony optimization for this selection.
(5) Finally, the performance of the proposed ECAHA approach is compared with state-of-the-art approaches.
The rest of the paper is organized as follows: Section 2 discusses the literature related to congestion avoidance and control. The efficient congestion avoidance approach is elaborated in Section 3, which includes the problem statement and the congestion indicator model, followed by the Huffman coding enabled ant colony optimization concepts. The simulation results of the proposed mechanism (ECAHA) are explained in Section 4. Section 5 concludes the paper.
2. Literature Review
This section surveys different research works concerned with congestion avoidance and congestion control algorithms.
2.1. Non-Nature-Inspired Techniques
The proposed work is aimed at avoiding congestion-like situations in WSNs, which is a very challenging issue. Congestion directly affects the throughput of the network. It broadly occurs when sensor nodes send data to a link at a rate beyond its capacity to process. When nodes accept data packets at a higher rate than their capacity, the data packets may get dropped along the path or face delay. If the acknowledgment for a packet is not received, the packet is considered dropped or lost. Such an unacknowledged packet requires retransmission, but retransmission of lost/dropped packets has a limit: retransmitting lost packets again and again may degrade network performance. To tackle such situations, various algorithms have already been proposed, though none is suitable for all WSN applications. The CODA (congestion detection and avoidance) algorithm was proposed by Wan et al. The mechanism works in three broad steps: congestion detection, congestion notification, and congestion control. Congestion detection is done using buffer occupancy and channel status; this is a mechanism to detect a faulty situation before its occurrence. After detecting congestion, notification is done by the backpressure method, applied hop by hop. This notification can be implicit or explicit, and both forms have their advantages and disadvantages. To notify, a beacon message is sent implicitly or explicitly to the backward node, which increases overhead on an already congested link. For control, AIMD (additive increase multiplicative decrease) is used, which increases the data rate additively and decreases it multiplicatively [13]. CCF (congestion control and fairness) was proposed by Brahma et al. for many-to-one routing to control congestion and provide transparent end-to-end packet delivery. Fairness is evaluated based on the ratio of the number of packets transmitted to the number of packets received [14].
Kasyap and Kumar proposed Trickle, a self-regulating algorithm for code propagation and maintenance. It broadcasts updated data information to disseminate any sort of data [15]. The PCCP (priority-based congestion control) algorithm was proposed by Wang et al., which prioritizes nodes for critical situations. This method uses a congestion degree parameter computed from the interarrival time and service time of packets. In any critical situation, prioritized packets are processed before other packets, which is aimed at maintaining the overall performance of the network [16]. CAF (congestion avoidance and fairness), proposed by Ahmad and Turgut, calculates the characteristic ratio of the number of downstream nodes to upstream nodes. It monitors the buffer occupancy of downstream nodes to avoid congestion and fairly balances the load by monitoring buffer occupancy levels [17]. DAlPaS (dynamic alternate path selection algorithm), proposed by Sergiou and Vassiliou, detects congestion from the increased load on a link and opts for an alternate congestion-free path to divert the data flow. Various optimization techniques have also been introduced for congestion avoidance and control [18].
2.2. Nature-Inspired Techniques
All the above approaches lead to excessive communication load and battery power depletion. These traditional approaches are not suitable for complex problems and do not provide optimal solutions. Their drawbacks cannot be recovered easily in communication networks, and they are not favorable for the dynamic and unpredictable environment of WSN applications. So there is a need to introduce self-adaptive approaches. Self-adaptation is the strength of nature-inspired techniques, which are very attractive for computing issues that arise in WSNs. Nature-inspired techniques perform complex tasks in a fitting manner with limited resources and capability. These techniques draw on various disciplines to compute complex tasks, incorporating significant features of emerging fields. The algorithms are developed by drawing inspiration from natural, social, and local behavior, leading to emergent global behavior that reduces congestion. They take knowledge from various branches of science, such as chemistry, mathematics, physics, biology, and engineering, which helps in developing computational tools for complex problems. Nature-inspired approaches are opted for when a problem is nonlinear and complex, with a huge number of potential solutions and objectives.
Based on the solutions they produce, optimization algorithms are divided into two categories: deterministic and nondeterministic (stochastic). Deterministic algorithms are conventional, classical algorithms based on mathematical programming, such as linear or nonlinear programming, quadratic programming, and gradient-based or gradient-free methods. Nondeterministic algorithms exhibit some randomness and produce variant results across runs. Stochastic algorithms explore different search spaces to obtain the global optimum, escaping local optima, and are more capable of handling NP-hard problems. Optimization techniques like ACS (adaptive cuckoo search), bird flock behavior, artificial honey bee, fireflies, particle swarm intelligence, honey pot, gravitational search algorithm, and ant colony optimization are some nature-inspired approaches. All these nature-inspired optimization techniques help in providing optimal solutions to control congestion and maintain the performance and throughput of WSNs. The adaptive cuckoo search uses cuckoo behavior for optimal rate adjustment to minimize congestion. It adjusts the share rate within the limits of the packet service rate, using a fitness function for the share rate that ensures minimum drop at the congested link. This method maximizes system performance [19–21]. FlockCC, proposed by Antoniou et al., is based on bird behavior. This approach adopts a swarm intelligence paradigm where packets are guided to form a flock and flow towards the sink like a global attractor. It dynamically balances loads within the available network resources. The method provides graceful performance in terms of packet delivery ratio, packet loss rate, and delay, and it is very scalable and can be adopted in complex networks [22]. REFIACC, proposed by Kafi et al.,
is a reliable, efficient, fair, and interference-aware congestion control protocol which schedules communication to prevent interference at inter- and intra-path hot spots. It maximizes utilization of the available bandwidth, which improves overall performance [23]. PSO (particle swarm optimization) is inspired by fish schooling and bird swarm intelligence. Each particle moves in some direction by learning from its own experience or the experience of its companions. All particles move cooperatively and competitively in the search space synchronously to obtain the best solution [24, 25]. Every particle aims to move toward the best neighbor node on the path, moving from one location or node to the next best possible node with a regulated velocity. It discovers pbest and gbest solutions, and finally a fitness function value for each node is obtained. Further, GSA is used in collaboration with PSO, where the best-performing nodes are attracted towards the sink nodes. Heuristic and metaheuristic approaches are the two main derivative-free stochastic optimization families. Metaheuristic approaches provide better results for complex problems than heuristic approaches, so this work follows a metaheuristic approach, which provides an optimal solution after considering the various concerned parameters. It discovers results through a trial-and-error phenomenon, employing a group of search agents that explore the most feasible output depending on randomization and a few specific rules; these rules are nature-inspired [26, 27]. Lee and Teng proposed an improved version of the LEACH clustering protocol to reduce the packet loss rate as well as prolong the network lifetime using fuzzy inference systems. The approach is named the enhanced hierarchical clustering approach. Nodes with higher residual energy, a slower moving rate, and a longer pause time are selected as cluster heads [28].
To conserve energy and reduce packet loss in WSNs, El Alami and Nagid worked on routing techniques, considering the mobility of the sink as a significant challenge for the packet loss rate. Their technique, named routingGi, proves more efficient than existing techniques in terms of network lifetime, energy efficiency, and packet delivery rate [29]. El Alami and Nagid further adopted an enhanced clustering hierarchy approach to maximize the network lifetime, where the nodes with higher energy gather data to transmit to the base station. This approach also resolves the issue of redundant data collection by nodes. It works with a sleeping-waking mechanism that reduces energy consumption, and it can be utilized in both homogeneous and heterogeneous networks [30]. Sangeetha et al. introduced a heuristic approach for searching for an energy-efficient optimal path in WSNs. The authors focused on adjusting node degree and topology periodically to save battery power. The data flow is then balanced using fuzzy logic to avoid congestion; if congestion occurs, an alternate best path is found using the learning real-time A* heuristic algorithm [31]. Logambigai et al. continued the work on heuristic approaches, introducing another energy-efficient grid-based clustering approach with intelligent fuzzy rules. In this method, routing is performed using a grid coordinator with fuzzy rules, considering a minimum of intermediate nodes in the routing process. This approach performs well in terms of network lifetime and energy [32]. One more improved congestion-aware routing mechanism using fuzzy rule sets was proposed by Sangeetha et al. This is a traffic-prudent method which identifies a more reliable path and handles excessive traffic using fuzzy rule prediction. It works in two segments: one identifies a path through positioning non-localized nodes, and the other identifies routes that mitigate congestion via a congestion-free path.
The congestion estimate is done through the ECFM algorithm [33]. One more clustered gravitational and fuzzy-based energy-efficient approach was proposed by Selvi et al. to address the limitations and challenges of existing routing systems. It utilizes a heuristic gravitational clustering approach to provide an optimal solution for effective routing and efficient clustering. The most appropriate route with cluster head nodes is chosen after applying fuzzy rule sets. The approach increases the lifetime of the network and reduces energy consumption [34]. ACO is a nature-inspired metaheuristic mechanism carrying a few inherent features that manifest excellent scaling characteristics. Metaheuristic techniques perform well in mitigating congestion and are significant for large-scale networks as well. The objective of ACO here is to discover congestion-free alternate paths. ACO prospects alternate paths by generating and forwarding artificial ants over the search space. Like real ants moving through a search space in search of food, they move one after another, perceiving the existing pheromone. The probabilistic movement from one node to another allows the ants to prospect new and safe paths, and the density of pheromone attracts other ants to move along the best possible path. Also, the heuristic values of different paths play a vital role in adopting the most appropriate path [35–39]. The heuristic value captures local information about the path, which cooperates in evaluating the congestion degree. The data rate on different routes is uncertain or random, so information entropy, or packet entropy, is measured here along with buffer entropy to calculate the packet loss rate. The packet loss rate helps measure the congestion degree and move towards the best solution. The entropy function is used to evaluate the probability of congested paths via the packet loss probability on different paths. This indicates the uncertainty of the flow of data, which can be the cause of congestion.
Entropy is a concept used to measure the disorder, uncertainty, or randomness of a system. The concept of entropy was first introduced in statistical thermodynamics and was later formalized in information theory. Its applications now span communication networks, biological research, and many other fields. In communication networks, entropy is widely used to measure the abnormality degree of an event; monitoring abnormal events helps improve performance metrics of the network such as throughput and energy efficiency. Entropy is mainly studied in three categories: Shannon entropy, Rényi entropy, and Tsallis entropy. Most studies discuss only the Shannon and Tsallis entropies. Shannon entropy was introduced by C. E. Shannon and is extensive, while Tsallis entropy is nonextensive. Shannon entropy is a quantitative measure of uncertainty in a data set. Tsallis entropy explores problems with a multifractal structure and long-range dependence, focusing on the effectiveness of entropy in controlling congestion. Shannon's work was extended by Jaynes in the maximum entropy principle, which has the inherent property of optimizing the entropy measure when incomplete information is provided in moment-constraint form [40–44]. Shannon's expressions and their relevance to the proposed technique are described in the congestion indicator model section. Table 1 lists the notations used in this paper.
3. Proposed Efficient Congestion Avoidance Approach (ECAHA)
In this section, the problem statement and the congestion indicator model are presented to elaborate the congestion issue and the behavior of buffer occupancy. Alternate congestion-free path selection using ACO and Huffman coding is then proposed.
3.1. Problem Statement
For monitoring purposes, sensor nodes are deployed in a network of size N. The WSN is represented by a connected graph G = (V, E), where V is the set of vertices (sensor nodes) and E is the set of connecting edges between sensor nodes. One node is designated as the sink node and has an adequate energy resource, as shown in Figure 1. All the nodes are static once deployed in the network. The initial energy and communication range of all the nodes are equal, and nodes within each other's communication range are assumed to be adjacent. For every node, the average residual energy, the delay, and the hop count are maintained. When a source node transmits data packets to the sink node, the energy level decreases with each transmission and reception of a data packet. E_tx and E_rx denote the energy consumed in transmitting and receiving k bits of information between two adjacent nodes.
E_elec denotes the energy consumed in transmitting or receiving one bit in the radio electronics, and E_amp is the amplifying energy, i.e., the additional energy dissipation per bit over the transmission distance. The distance between two adjacent nodes i and j is d(i, j). The total energy consumed on a link is the sum of the energy consumed in transmitting and receiving.
This can be simplified as the energy consumed in transmitting and receiving k bits of data, multiplied by the number of forwarding nodes including the source node. Suppose the source node transmits through a sequence of intermediate nodes before reaching the sink; the total path energy is the sum of the link energies over these hops.
We define the link lifetime using the minimum residual energy along the link as
When a node has depleted all of its energy, it will not participate in further network communication. Using equation (5), the lifetime of the complete network can be expressed as
Equation (6) can be expressed as
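The energy and lifetime quantities above can be illustrated with a minimal code sketch of the standard first-order radio model. The constants, function names, and the free-space distance exponent are assumptions for illustration, not values taken from the paper:

```python
# Sketch of the per-link energy model and the lifetime metric described above.
# E_ELEC (electronics energy per bit) and E_AMP (amplifier energy per bit per
# square meter) are illustrative values, not the paper's.

E_ELEC = 50e-9   # J/bit, transmitter/receiver electronics (assumed)
E_AMP = 100e-12  # J/bit/m^2, amplifier energy (assumed)

def tx_energy(k_bits, d):
    """Energy to transmit k bits over distance d (free-space model)."""
    return E_ELEC * k_bits + E_AMP * k_bits * d ** 2

def rx_energy(k_bits):
    """Energy to receive k bits."""
    return E_ELEC * k_bits

def link_energy(k_bits, d):
    """Total energy consumed on a link = transmit energy + receive energy."""
    return tx_energy(k_bits, d) + rx_energy(k_bits)

def network_lifetime(residual_energies):
    """Lifetime indicator: the minimum residual energy over all nodes,
    since the first node to die limits the whole network."""
    return min(residual_energies)
```

The `network_lifetime` function mirrors equation (5): the link (and hence the network) lives only as long as its weakest node.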
We assume that the nodes in the WSN are deployed in a Gaussian distribution fashion. The Gaussian distribution for a random variable x, with mean m and standard deviation σ, can be defined as f(x) = (1 / (σ√(2π))) exp(−(x − m)² / (2σ²)).
The mean or expected value of the distribution is m, and the standard deviation is σ. It has been observed that if sensor nodes are distributed in a Gaussian fashion, the probability of congestion detection is higher compared with other distribution strategies. Data packets travel between the source and the sink. When the arrival rate at a node exceeds its service rate, the data packets are required to wait in the node's buffer. In a WSN, each sensor node has a limited buffer space in which waiting data packets can be held until their service time. When the data packets exceed the limit of the buffer, congestion occurs and data packets may get dropped, indicating congestion on the path. The queueing delay is the average time elapsed from arrival until a packet is successfully processed from the queue. Let λ be the arrival rate and μ be the service (departure) rate of a node. The following conditions should be met to declare the network congestion-free or congested.
The interarrival time of the data and the data departure rate are measured at each node. Congestion can also be predicted by calculating the number of messages generated per unit time by a particular node plus the message arrival rate: if this sum is greater than the message departure rate of that node, it is a clear indication of congestion in the near future. It can be simplified as
This requires retransmission of the dropped data packets. Frequent retransmission may lead to queueing delay, more energy consumption, and increased congestion. Congestion deteriorates network lifetime and quality of service. The queue occupancy level can serve as an indicator of congestion. In the worst case, if congestion occurs on a path, it must be controlled; to do so, the flow of data is redirected to an alternate congestion-free path. In such a case, a metaheuristic approach is adopted to redirect the flow, promising fast, effective, and efficient relief from congestion. The proposed ECAHA approach aims to control congestion by identifying an optimal alternate route for the data flow, considering several parameters.
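The rate-based congestion condition above can be sketched as a simple check. The function and state names are illustrative, not from the paper:

```python
def congestion_state(arrival_rate, service_rate, generation_rate=0.0):
    """Classify a node's congestion state from its packet rates.

    Following the condition above: if the rate of packets entering a node
    (arrivals plus locally generated messages) exceeds its departure
    (service) rate, congestion is imminent; if the rates are equal, the
    node is operating at capacity.
    """
    inflow = arrival_rate + generation_rate
    if inflow > service_rate:
        return "congested"
    if inflow == service_rate:
        return "at capacity"
    return "congestion-free"
```

For example, a node servicing 8 packets per unit time that receives 10 and generates 1 is congested, while one receiving only 5 is congestion-free.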
3.2. Congestion Indicator Model
This section elaborates the behavior of buffer occupancy, which eventually affects the congestion level, followed by the notification regarding congestion on a path. Let B_v denote the virtual buffer occupancy due to incoming packets and B_max the actual maximum buffer size of a node. If B_v exceeds B_max, or the buffer occupancy reaches 95% or more, it is an indication of congestion; 5% of the buffer space is kept free for in-flight packets to prevent them from being dropped. Here, 95% of B_max is the upper threshold of the buffer. Once the congestion indication is positive, it is mandatory to evaluate the congestion level.
(a) Buffer Occupancy Status. B_max and B_v are the maximum and virtual buffer occupancies. If B_v ≥ θ·B_max, where θ = 0.95, congestion occurs.
(b) Level of Congestion. The level of congestion at an intermediate node is computed from its buffer occupancy relative to the threshold. The bandwidth of the network also decreases with an increasing level of congestion; as the incoming packets increase toward the buffer limit, the congestion level also increases.
(c) Notification for Congestion. When the congestion indicator provides evidence of congestion occurrence, the backward nodes must be notified implicitly or explicitly. To reduce extra overhead, an implicit notification beacon packet is sent to the source node.
(d) Congestion Control. To control the drop rate, the data flow must be redirected to an alternate congestion-free route. To choose such a route, the ant colony optimization technique is used: ACO helps in identifying multiple alternate congestion-free paths, and Huffman coding helps in selecting an optimal path from among them. The proposed approach is preemptive; it monitors for congestion before its occurrence and avoids its initiation.
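As an illustration of the Huffman-based selection in step (d), the sketch below builds a Huffman tree over the candidate paths' packet loss probabilities and picks the path with the smallest loss probability, which receives the longest codeword. This is one plausible reading of the selection step; the exact procedure ECAHA uses may differ, and all names here are illustrative:

```python
import heapq
import itertools

def huffman_code_lengths(probs):
    """Build a Huffman tree over path loss probabilities and return the
    codeword length assigned to each path. Low-loss (rare) paths end up
    with the longest codewords."""
    counter = itertools.count()  # tie-breaker so tuples never compare lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    if len(heap) == 1:
        return [1]
    while len(heap) > 1:
        p1, _, ids1 = heapq.heappop(heap)
        p2, _, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:  # every merge adds one bit to these codewords
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), ids1 + ids2))
    return lengths

def optimal_path(loss_probs):
    """Select the path with the smallest loss probability, i.e. the one
    holding the longest Huffman codeword (ties broken by lower loss)."""
    lengths = huffman_code_lengths(loss_probs)
    return max(range(len(loss_probs)),
               key=lambda i: (lengths[i], -loss_probs[i]))
```

For loss probabilities [0.5, 0.3, 0.15, 0.05], the codeword lengths come out as [1, 2, 3, 3], and the path with loss 0.05 is selected as optimal.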
3.3. Alternate Congestion-Free Path Selection
This section explains the steps involved in obtaining multiple congestion-free alternate paths. Each sensor node maintains a routing table containing the following information: source ID, sink ID, FANT (forward ant) ID, distance between source and sink in terms of the number of intermediate hops, residual energy of the node, packet loss rate, congestion level of the node, and pheromone value. The ant colony optimization method constructs multiple alternate paths between the source and the sink. FANT (forward ant) and BANT (backward ant) are the two types of ants used in constructing paths. The FANT moves from the source to the sink, gathering information in the forward direction and initializing a pheromone value along the way. The pheromone value is then updated in the reverse direction from the sink to the source. The BANT informs the source node about energy consumption, packet loss rate, congestion level, and hop count. In this way, multiple alternate congestion-free paths are constructed by the FANT, and successful path creation is confirmed to the source node by the BANT.
3.3.1. Forward Ant (FANT)
A number of FANTs are generated to construct multiple congestion-free alternate paths between the source and the sink, as shown in Algorithm 1. When an ant moves from one location to another, it drops pheromone on that path; the density of pheromone is highest on the most favorable path. A FANT carries information such as source ID, sink ID, FANT ID, hop count, energy consumption, congestion level, minimum energy, and time to live (TTL). Every node in the network maintains a pheromone table and a routing table.
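The fields carried by a FANT can be sketched as a record. Field names, the TTL default, and the update rules for the running statistics are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FANT:
    """Information carried by a forward ant, per the field list above."""
    source_id: int
    sink_id: int
    fant_id: int
    hop_count: int = 0
    energy_consumed: float = 0.0
    congestion_level: float = 0.0      # worst congestion seen on the path
    min_energy: float = float("inf")   # minimum residual energy on the path
    ttl: int = 64                      # time to live (assumed default)
    visited: list = field(default_factory=list)  # path memory for hop count

    def visit(self, node_id, residual_energy, congestion):
        """Record one hop: update hop count, TTL, path memory, and the
        running minimum-energy and worst-congestion statistics."""
        self.hop_count += 1
        self.ttl -= 1
        self.visited.append(node_id)
        self.min_energy = min(self.min_energy, residual_energy)
        self.congestion_level = max(self.congestion_level, congestion)
```

A FANT created at node 1 for sink 9 that traverses nodes 2 and 3 ends up with hop count 2 and the minimum residual energy seen along the way.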

In the FANT phase, a transition probability of selecting the next hop while moving from the source to the sink is computed for every candidate neighbor, where all candidates belong to the nodes of the network.
The pheromone density of a link and the heuristic value of the link (its attractiveness coefficient) at time t determine this probability. A memory stores information about the path (visited nodes), which is used to compute the path length in terms of hop count. Here, α and β are controlling parameters weighting the pheromone and heuristic values. The pheromone value is updated as the visited nodes are updated, as expressed in equation (15).
The evaporation factor decrements the pheromone with time. The loads on the two endpoint nodes of a link, together with a weighting coefficient, are also taken into account, and the load information from source to destination is stored. The heuristic value denotes the ratio of load to distance, where the distance between the two adjacent nodes is used along with weighting (controlling) parameters. This local heuristic value for different paths is incorporated in measuring the congestion degree and works as an important parameter for calculating the packet loss probability on different paths.
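The FANT's probabilistic next-hop choice and pheromone update can be sketched using the conventional ACO transition rule, P_ij ∝ τ_ij^α · η_ij^β, and the evaporation rule τ ← (1 − ρ)τ + Δτ. The parameter values and function names below are illustrative, not the paper's:

```python
import random

def next_hop_probabilities(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Transition probabilities: P_j proportional to tau_j^alpha * eta_j^beta,
    normalized over the candidate next hops."""
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next_hop(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection of the next hop by a FANT."""
    probs = next_hop_probabilities(pheromone, heuristic, alpha, beta)
    r, cum = rng.random(), 0.0
    for j, p in probs.items():
        cum += p
        if r <= cum:
            return j
    return j  # numerical safety for rounding at the boundary

def update_pheromone(tau, deposited, rho=0.1):
    """Evaporate a fraction rho of the pheromone, then deposit new pheromone."""
    return (1.0 - rho) * tau + deposited
```

With equal heuristics, a link holding twice the pheromone of its alternative is chosen with probability 2/3, which is how the pheromone density steers later ants toward favorable paths.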
3.3.2. Estimation of Packet Loss Probability
This section explains the applicability of the packet loss probability, calculated from packet entropy, in congestion avoidance.
As the data rate on different routes is uncertain, which can be the cause of congestion in a WSN, the packet loss rate is also uncertain. The uncertainty of information is measured through entropy, a concept introduced under information theory. The concept of entropy was first introduced in statistical thermodynamics. In communication networks, entropy is widely used to measure the abnormality degree of an event, which ultimately helps improve performance metrics of the network such as throughput and energy efficiency. Here, Shannon entropy [40] is used, which is a quantitative measure of uncertainty in a data set. Shannon's work was extended by Jaynes in the maximum entropy principle, which has the inherent property of optimizing the entropy measure when incomplete information is provided in moment-constraint form.
Every node in a WSN is designed with a finite buffer space to store incoming data packets. To evaluate the packet loss rate, the buffer entropy is calculated. The maximum entropy principle is used when information about the mean arrival and service rates is absent; it yields a loss formula providing an expression for the state probability distribution of the loss system. This maximum entropy framework is used to study queue behavior for the packet loss rate, considering the arrival rate and service rate under a normalization constraint, a moment constraint, and a utilization constraint.
As data travels from one node to another, the value of the entropy changes; the rate of data arrival in the buffer also varies, which changes the buffer entropy too. The heuristic data observes the packet arrival rate in the buffer. Assume a single-server queue with a finite buffer; the probability distribution of the packet arrival rate is then equivalently the distribution of the queue size. Entropy is defined here as the average of the information received; accordingly, the information quantity (IQ) of the data packets is shown in equation (20).
Shannon entropy [40], a quantitative measure of uncertainty, is used here. Uncertainty arises in probabilistic as well as deterministic phenomena where the outcome concerns the possibility of some specific result. Maximum entropy finds the maximum packet loss probability, which helps in selecting an optimum path: the path with the minimum packet loss probability is chosen as the optimal path. Packet entropy is the average information quantity, measured by the function Shannon defined over the finite number of data packets transmitted from source to sink [41].
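As a concrete illustration, a minimal sketch of Shannon entropy in Python, covering the two limiting cases discussed in the text (minimum entropy when the arrival probability is 0 or 1, maximum entropy when it is 1/2). The function name is illustrative, not from the paper:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Binary case from the text: a packet either arrives or does not.
print(shannon_entropy([0.5, 0.5]))   # maximum entropy: 1.0 bit
print(shannon_entropy([1.0, 0.0]))   # minimum entropy: 0.0 bits
```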
The limiting cases are as follows: (1) the entropy is 0 if the probability of packet arrival is either 0 or 1, which is known as minimum entropy; (2) the entropy is maximal if the probability of packet arrival is 1/2, which is known as maximum entropy.
The moment constraints used to deduce the maximum uncertainty for the finite buffer queuing system are defined in equation (22).
The queue utilization function for a nonempty queue is defined in equation (23).
The queuing system also includes the empty state in its state space, with a probability of zero jobs in the system and an indicator function that equals 0 for the empty state and 1 otherwise. The natural probability constraint, or normalization constraint, is shown in equation (24).
Shannon entropy is maximized subject to the constraints in equations (22), (23), and (24). The method of Lagrange multipliers is used to determine the maxima or minima of a function; it forms a weighted sum of the objective and constraint functions, with one Lagrange multiplier associated with each of the moment, utilization, and normalization constraints.
Differentiate the Lagrange function with respect to the state probabilities and maximize it, as given in [43, 44]; this yields the maximum-entropy form of the state probability.
Now, substitute this value into the constraint functions (22) and (23).
Then, solve equation (30) to obtain the value shown in equation (31).
The packet loss probability [43] of the fully occupied finite buffer, using the maximum-entropy probability distribution of the system size, is computed as
Here, the Riemann zeta function appears, which is widely used in number theory when a distribution is related to prime numbers; it is the infinite series shown in equation (33).
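The derivation in equations (22)-(32) can be illustrated numerically. The following is a minimal sketch under the assumption that only the buffer size N and a mean queue length (the moment constraint) are known: the maximum-entropy distribution over the states 0..N then has the geometric form p(n) proportional to x^n, and x is found by bisection so that the mean constraint is satisfied. The function name, bisection bounds, and tolerance are implementation choices, not the paper's values:

```python
def max_entropy_loss(N, mean_len, tol=1e-10):
    """Packet loss probability p(N) of a finite buffer of size N under the
    maximum entropy principle with a mean-queue-length (moment) constraint.
    The max-entropy distribution is p(n) = C * x**n for n = 0..N; x is found
    by bisection so that sum(n * p(n)) == mean_len (0 < mean_len < N)."""
    def mean_of(x):
        w = [x**n for n in range(N + 1)]
        return sum(n * wn for n, wn in enumerate(w)) / sum(w)

    lo, hi = 1e-9, 1e9        # mean_of(x) is increasing in x
    while hi - lo > tol * max(1.0, lo):
        mid = (lo + hi) / 2
        if mean_of(mid) < mean_len:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    w = [x**n for n in range(N + 1)]
    return w[N] / sum(w)      # p(N): probability the buffer is full
```

For a buffer of size 10 with mean occupancy 5 the constraint is satisfied by the uniform distribution (x = 1), giving a loss probability of 1/11.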
3.3.3. Backward Ant (BANT)
The BANT follows the backward path, from sink to source node, identified by the FANT. The pheromone value is updated on every move along the communication link, as shown in Algorithm 2. The BANT also carries the source ID, sink ID, BANT ID, energy consumption, hop count, and congestion level. The updated pheromone value of a link is computed from the path's packet loss rate, total link energy consumption, hop count of the current path, and congestion level of the link, each with an associated weight parameter; two additional controlling parameters monitor the pheromone update. The ant moves to the next node in the backward direction, updates the pheromone value, and recalculates the mentioned parameters so that optimal decisions can be made.
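The BANT pheromone update described above can be sketched as a weighted combination of the four path metrics. The weight values, the evaporation rate, and the exact form of the quality term below are illustrative assumptions, not the paper's tuned parameters:

```python
def update_pheromone(tau, loss_rate, energy, hops, congestion,
                     w_loss=0.3, w_energy=0.3, w_hops=0.2, w_cong=0.2,
                     rho=0.1):
    """One backward-ant pheromone update on a link (illustrative sketch).
    tau: current pheromone; rho: evaporation rate. loss_rate, energy, and
    congestion are normalized to [0, 1]. The deposited amount rewards paths
    with low packet loss, low energy cost, few hops, and low congestion."""
    quality = (w_loss * (1 - loss_rate) + w_energy * (1 - energy)
               + w_hops * (1 / hops) + w_cong * (1 - congestion))
    return (1 - rho) * tau + rho * quality
```

A good link (low loss, low congestion) thus ends up with a higher pheromone value than a poor one, biasing subsequent forward ants toward it.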

3.4. Select Optimal Path Using Huffman Coding
After ACO has identified multiple alternate congestion-free paths, Huffman coding selects the optimal one. For this, a tree of nodes is generated with low-congestion nodes near the root and high-congestion nodes near the leaves; the root acts as the sink node. The objective is to choose a path with little or no congestion near the sink node; such a path is considered optimal. All the parameters play an important role in selecting the optimal path, but since congestion is the major concern, Huffman coding opts for the path with the minimum congestion level.
The multiple paths can be represented as a set in which each path has a length and an associated weight. So,
The congestion-level-oriented optimal path evaluation function can be represented as
Here, the optimal path chosen after Huffman coding is determined by the packet loss rate and the minimum congestion level on the link, with controlling parameters for hop count and congestion level.
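A minimal sketch of this evaluation function: each candidate path returned by ACO is scored by its packet loss rate plus weighted penalties for hop count and congestion level, and the minimum-cost path is chosen. The weight values and field names are illustrative assumptions:

```python
def path_cost(path, alpha=0.1, beta=0.6):
    """Weighted cost of a candidate path: packet loss rate plus penalties
    for hop count (alpha) and congestion level (beta)."""
    return path['loss_rate'] + alpha * path['hops'] + beta * path['congestion']

def select_optimal_path(paths, alpha=0.1, beta=0.6):
    """Return the minimum-cost path among the ACO-discovered alternatives."""
    return min(paths, key=lambda p: path_cost(p, alpha, beta))

paths = [
    {'id': 1, 'loss_rate': 0.2, 'hops': 4, 'congestion': 0.5},
    {'id': 2, 'loss_rate': 0.1, 'hops': 6, 'congestion': 0.2},
    {'id': 3, 'loss_rate': 0.3, 'hops': 3, 'congestion': 0.9},
]
print(select_optimal_path(paths)['id'])  # path 2 wins (cost 0.82)
```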
3.5. Huffman Coding for Congestion Control
This section describes how Huffman coding selects the optimal path among the multiple alternate paths identified by ACO, focusing on the packet loss rate of the different paths. The working of Huffman coding (HC) is elaborated in Figure 2, where a, b, c, d, e, and f are different nodes of a network with packet loss rates 6, 10, 13, 14, 17, and 46, respectively, as shown in Figure 2(a). A Huffman tree is constructed to give a better understanding of how an optimal path is chosen in a WSN during congestion; this Huffman coding approach is aimed at arranging the nodes optimally according to their usage. First, create a min heap of six nodes, each the root of a single-node tree. Extract the two nodes with minimum packet loss rate from the min heap and add an internal node with packet loss rate 6 + 10 = 16, as shown in Figure 2(a). The min heap now has five nodes: four are roots of single-node trees, and one is the root of a tree with three elements, as shown in Figure 2(b). Extract the two nodes with minimum packet loss rate from the heap and add a new internal node with packet loss rate 13 + 14 = 27. The min heap now has four nodes: two are roots of single-node trees, and two are roots of trees with more than one node, as shown in Figure 2(c). Extract the two minimum nodes again and add an internal node with packet loss rate 16 + 17 = 33; the min heap now has three nodes. Extract the two nodes with minimum packet loss rate and add a new internal node with packet loss rate 27 + 33 = 60, as shown in Figure 2(d). The min heap now has two nodes. Extract them and add a new internal node with packet loss rate 46 + 60 = 106, as shown in Figure 2(e). The min heap now has only one node, so the algorithm ends here. Finally, traverse the tree from the root node, writing 0 for every left child and 1 for every right child.
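The min-heap construction walked through above is the classic Huffman procedure and can be sketched directly in Python using heapq. The tie-breaking counter and the dictionary representation of tree nodes are implementation choices, not part of the paper:

```python
import heapq
from itertools import count

def huffman_codes(weights):
    """Build a Huffman tree over nodes keyed by packet loss rate and return
    each node's binary code (left edge = 0, right edge = 1)."""
    tick = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(w, next(tick), {'name': name}) for name, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two smallest loss rates
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tick),
                              {'left': left, 'right': right}))
    codes = {}
    def walk(node, code):
        if 'name' in node:
            codes[node['name']] = code or '0'
            return
        walk(node['left'], code + '0')
        walk(node['right'], code + '1')
    walk(heap[0][2], '')
    return codes

# Packet loss rates of nodes a-f from Figure 2
rates = {'a': 6, 'b': 10, 'c': 13, 'd': 14, 'e': 17, 'f': 46}
print(huffman_codes(rates))
```

With these rates, node f (loss rate 46, nearest the root) gets a 1-bit code, while a and b (the smallest rates, deepest in the tree) get 4-bit codes, mirroring the merge order described above.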
3.6. ECAHA Flow Chart
The complete operation of the proposed technique is presented by the flow chart in Figure 3, whose three major portions clearly define the steps involved. During communication, if the monitored traffic rate exceeds its threshold, the congestion level is checked by computing the buffer occupancy level; if the virtual buffer occupancy is greater than the maximum buffer limit, link-layer congestion is present and it is time to reroute the data on an alternate route; otherwise, the network is safe and communication proceeds smoothly. The ACO operation is then performed: ACO identifies multiple alternate congestion-free paths and estimates the packet loss probability by calculating the packet entropy, which helps in identifying an optimal path. Finally, Huffman coding selects an optimal path among the multiple congestion-free alternate paths. The pseudocode for ACO clearly defines all the steps involved.
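The first portion of the flow chart, the congestion check, can be sketched as a simple decision function. The 0.8 occupancy limit, the rate comparison, and the return labels are illustrative assumptions standing in for the paper's thresholds:

```python
def detect_congestion(arrival_rate, service_rate, queue_len, buffer_size,
                      occupancy_limit=0.8):
    """Flow-chart decision sketch: flag link-layer congestion when the
    incoming rate exceeds the service rate AND the (virtual) buffer
    occupancy crosses its limit, signalling that rerouting is needed."""
    if arrival_rate <= service_rate:
        return 'safe'       # network is safe, communication proceeds
    if queue_len / buffer_size > occupancy_limit:
        return 'reroute'    # link-layer congestion: switch to an ACO path
    return 'monitor'        # load is rising; keep checking occupancy
```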
3.7. Time Complexity
The time complexity of the proposed ECAHA algorithm depends on the complexities of Huffman coding and ant colony optimization. For "n" sensor nodes, Huffman coding takes O(n log n) time to control congestion on the network path [10], whereas ant colony optimization selects the best route in a duration governed by the pheromone evaporation rate [35]. Thus, the overall time complexity of the proposed ECAHA is the summation of these two terms.
4. Simulation and Result Discussion
In this section, extensive simulations of the proposed algorithm are performed using MATLAB 2017b. Between 20 and 100 sensor nodes are randomly distributed in the 2D space of the wireless network. The base station, which is capable of data gathering and query processing, is placed at (40, 72) inside the network. Each node starts with a fixed initial energy (in joules). The pheromone released by an ant evaporates once every 4 seconds, the number of search ants is set to 6, and their maximum lifetime is 20. The maximum number of iterations, or rounds, is 1000. The remaining simulation parameters are listed in Table 2. The proposed algorithm is compared with the nature-inspired algorithm ACSRO [19] and the traditional congestion avoidance algorithm CODA [13]. Performance is evaluated in terms of average throughput, average hop-by-hop delay, packet delivery ratio, and node death percentage. These parameters respectively define the rate of successful data delivery over the total bandwidth, the time delay in forwarding a packet from the current hop to the next, and the ratio of packets successfully received by the receiver to packets sent from the source. Further, a comparative analysis of ECAHA with the state-of-the-art algorithms is also presented.
4.1. Average Energy Consumption over Rounds
A comparison of the energy consumption of the proposed ECAHA algorithm with the state-of-the-art algorithms over the number of rounds is shown in Figure 4. It can be observed that in the initial phase, at round 400, the energy consumption is 0.12 joules, 0.2 joules, and 0.51 joules for ECAHA, the nature-inspired ACSRO [19], and CODA, respectively. As the rounds increase up to 1000, ECAHA consumes only 1.4 joules and ACSRO consumes 1.6 joules, while the worst performance is shown by CODA, which consumes around 1.9 joules. This is because the proposed algorithm chooses the next-hop routing path based on higher residual energy and lower congestion density using Huffman coding and ant colony optimization, whereas ACSRO uses adaptive cuckoo search rate adjustment for next-hop selection, which still performs better than CODA. The worst performance is shown by the non-nature-inspired algorithm because it has no selection policy for next-hop routing.
4.2. Average Residual Energy of Nodes over Rounds
A comparison of the average residual energy of ECAHA and the state-of-the-art algorithms is presented in Figure 5. It is clearly observed that as the number of rounds increases, the residual energy decreases for all three algorithms; after 1000 rounds, ECAHA and ACSRO retain 0.4 and 0.2 joules of residual energy, respectively, while CODA exhausts all available energy, leaving zero residual energy. This is because the proposed ECAHA selects the next hop for routing based on ant colony optimization and gives priority to nodes with higher residual energy using Huffman coding. ACSRO selects the packet delivery path based on residual energy and minimum hop distance but ignores the congestion density of the next available node, whereas ECAHA considers all the mentioned parameters. It is also worth noting that CODA shows the poorest performance because its route selection depends only on minimum distance and does not consider the residual energy of the next hop, which in turn increases the overall energy consumption and decreases the network lifetime.
4.3. Average Throughput over Source Data Rate
A comparison of the average throughput of the proposed algorithm using 20 and 100 nodes in the network is shown in Figures 6(a) and 6(b), respectively. It is clearly observed from Figure 6(a) that the proposed ECAHA improves throughput by about 18% compared to ACSRO and 21% compared to CODA, whereas Figure 6(b) shows that all three algorithms provide nearly the same delivery rate of 50 packets per second at the lowest source data rate. As the source data rate increases, ECAHA increases the average throughput rapidly, reaching about 100 packets per second, compared to 91 and 89 packets per second for ACSRO and CODA, respectively. This is because transmission-level congestion is controlled through the effective selection of the next hop based on the ant colony approach, which provides alternate paths with fewer packets to forward. It can also be observed that the average throughput of CODA shows much more variation than the other algorithms; this can be attributed to CODA consuming more energy to forward packets and rarely selecting the optimal next hop. Overall, ECAHA proves to be the most suitable algorithm for data forwarding in wireless sensor networks of 20 to 100 or more nodes: it can address congestion and missing packets and upgrade network throughput.
4.4. Average HopbyHop Delay over Source Data Rate
A comparison of the average hop-by-hop delay of the proposed ECAHA and the state-of-the-art algorithms is shown in Figures 7(a) and 7(b) for the 20- and 100-node scenarios, respectively. This metric is very important for showing the inter-path interference control mechanism, the overhead produced by the exchange of control packets, and the link-layer transmission delay. It can be clearly observed from Figure 7(a) that ECAHA has about 33% and 38% less delay than ACSRO and CODA, respectively, for source data rates of 40 packets per second and above, whereas in Figure 7(b), ECAHA delivers packets to the next hop about 39% faster than ACSRO and 43% faster than CODA. The reason is that ECAHA exchanges the minimum number of control packets for routing path selection: Huffman coding uses a priority-based heap to select the next hop based on the distance between a node and the base station, and the data routing path is balanced using the ant colony approach, which helps minimize the hop-by-hop delay. ACSRO and CODA are not able to optimize the number of control packet exchanges, which inherently increases interference, thereby increasing congestion in the network and, in turn, the overall hop-by-hop delay.
4.5. Packet Delivery Ratio over Number of Rounds
A comparison of the packet delivery ratio over the number of rounds for the proposed ECAHA and the state-of-the-art algorithms is presented in Figure 8. The packet delivery ratio serves as a throughput measurement parameter: the higher the packet delivery ratio, the higher the throughput in the network. It is clearly observed that as the number of rounds increases, the packet delivery ratio of the proposed algorithm remains high, at about 95%-98%. This is because the proposed algorithm uses Huffman coding to select the least congested routing path for packet delivery. The nature-inspired ACSRO algorithm performs better than the traditional congestion avoidance algorithm CODA, with a 35% higher packet delivery ratio, because ACSRO selects the next hop based on adaptive cuckoo search rate adjustment for optimized congestion avoidance. Finally, the proposed ECAHA achieves 8% and 48% higher packet delivery than ACSRO and CODA, respectively.
4.6. Node Death Percentage over Number of Rounds
A comparison of the node death percentage over the number of rounds for the proposed ECAHA and the state-of-the-art algorithms is presented in Figure 9. It is clearly observed that as the number of rounds increases, the death percentage of nodes also increases for all three algorithms, and all nodes are dead by at most 950 rounds. Further, the proposed ECAHA-based congestion control routing algorithm has 20% and 40% lower node death percentages than ACSRO and CODA, respectively, at 600 rounds. It is also worth noting that ECAHA runs for about 960 rounds, whereas ACSRO and CODA run for only 820 and 640 rounds. This is because ECAHA chooses routing paths with a lower congestion rate and more residual energy compared to ACSRO and CODA, while the non-nature-inspired algorithm CODA selects forwarding nodes considering only hop distance, without residual energy, which leads to the worst performance and a decreased network lifetime.
5. Conclusion and Future Scope
This paper proposed the congestion control algorithm ECAHA for wireless sensor networks, applicable to most applications such as health care, agriculture, and monitoring. The proposed algorithm uses the ant colony approach, with separate forward ant and backward ant algorithms, to find more than one path for data routing. Huffman coding is then applied on top to select the least congested path as the optimal data routing path from the sample space given by the ant colony approach. Extensive performance results show that the proposed algorithm outperforms the state-of-the-art algorithms on various parameters. Thus, the proposed algorithm forwards packets along the optimal data routing path in the current round, which decreases the hop-by-hop delay, improves the packet delivery ratio, and decreases the node death percentage, ultimately improving the throughput and lifetime of the network. The only shortcoming of the proposed approach is that, while networks are growing at a very fast rate, it is suitable only for small search spaces; therefore, in the future, we will look toward large search spaces. In the future, considering the Internet of Things environment, we will also include more parameters, such as dynamic traffic load or link breakage, to find a more optimal data routing path based on a reinforcement learning approach.
Data Availability
The experimental data and associated settings will be made available to researchers and practitioners upon individual request to the corresponding author, with the restriction that they be used solely for further research, as the associated research data is being further utilized for ongoing development research by the team.
Conflicts of Interest
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the SC&SS, Jawaharlal Nehru University, New Delhi.