Abstract

Nowadays, wireless sensor networks (WSNs) emerge as an active research area in which challenging topics involve energy consumption, routing algorithms, selection of sensor locations according to a given premise, robustness, efficiency, and so forth. Despite the open problems in WSNs, a large number of applications are already available. In all cases, one of the main design objectives of any application is to keep the WSN alive and functional as long as possible. A key factor in this is the way the network is formed. This survey presents the most recent formation techniques and mechanisms for WSNs. The reviewed works are classified into distributed and centralized techniques. The analysis focuses on whether a single sink or multiple sinks are employed, nodes are static or mobile, the formation is event-detection based or not, and a network backbone is formed or not. We concentrate on recent works and discuss their advantages and drawbacks. Finally, the paper overviews a series of open issues which drive further research in the area.

1. Introduction

Despite the many open research problems in wireless sensor networks (WSNs), there is already a large number of applications in which these networks can be used. Application fields include tracking, monitoring, surveillance, building automation, military applications, and agriculture, among others. In all cases, one of the main design objectives of any application is to keep the WSN alive and functional as long as possible. A key factor in this is the way the network is formed. In fact, the topology is mostly defined based on the application environment and context. The sensor information is usually collected through the gateways available in a given topology. This information is then forwarded to a leader node or to a base station known as the sink.

The design complexity of a WSN depends on the specific application requirements such as the number of nodes, the power consumption, the life span of the sensors, the information to be sensed and its timing, the geography of the deployment area, the environment, and the context.

This survey presents the most recent formation techniques and mechanisms for WSNs. The reviewed research works are classified into distributed and centralized techniques. In the former, nodes are autonomous and communication occurs only between neighboring nodes, while, in the latter, the network formation is controlled by a single device.

The analysis focuses on whether a single sink or multiple sinks are employed, nodes are static or mobile, the formation is event-detection based or not, and a network backbone is formed or not. The survey is dedicated to recent works and discusses their advantages and drawbacks.

This paper is organized as follows: Section 2 presents the WSN generalities and the way the reviewed works are classified according to several features. Section 3 focuses on the centralized networks classification; Section 4 describes the distributed networks classification. Section 5 shows the commonly used standards and protocols for WSNs. Section 6 depicts the advantages and disadvantages of the reviewed works; finally, Section 7 presents the concluding remarks with a series of open questions which drive further research in the area.

2. Wireless Sensor Networks

Wireless sensor networks (WSNs) are composed of a finite set of sensor devices geographically distributed in a given indoor or outdoor environment (usually predefined). A WSN aims to gather environmental data, and the placement of the node devices may be known or unknown a priori. Network nodes can have physical or logical communication with other devices; such communication defines a topology according to the application. For instance, there can be a WSN in which both types of topologies are the same (mesh, star, etc.). However, this may not be the case for all applications. The logical topology is mainly defined based on the nodes' logical roles (tasks, etc.). It can be either ad hoc or strategy based (self-organization, clustering, pheromone tracking, and so on). The strategy is defined based on the network's available resources.

Centralized formation techniques are suitable for networks in which the processing power relies mostly on a unique device. In such cases, this device is responsible for the processing, coordination, and management of the sensed information. It also forwards these data to a sink node (Figure 1). The main advantages of this approach are as follows:
(i) Centralized schemes allow more efficient energy management (see Section 5).
(ii) Roaming is allowed inside the network.
(iii) Network coverage analysis is simplified.
(iv) Context information availability allows a better application design (placement of nodes, application awareness, etc.).

In distributed formation techniques, the information is managed by each node, and decisions are taken locally, limited to the node's neighborhood (single-hop neighbors). The main characteristics of distributed networks include the following:
(i) Devices are autonomous.
(ii) Each node shares information with its neighborhood.
(iii) They are suitable for distributed applications (multiagent systems, self-organized systems, etc.).
(iv) The information is mainly forwarded to a single node.
(v) Interconnection devices (routers, bridges, etc.) are not required.
(vi) Their flexibility allows targeting harsh environments.

The complexity of the information forwarding process requires robust algorithms. These algorithms have to assure the execution of specific tasks with a performance comparable to that of centralized solutions.

One of the most important distributed techniques in recent years has been self-organization. A sensor network using this strategy is able to achieve an emergent behavior in which nodes interact individually and coordinate autonomously (Figure 2). The target is to achieve tasks that exceed the individual capabilities of a single node. Examples of these techniques are found in nature (insect colonies, biological cells, flocks of birds, the foraging behavior of ants, etc.) [1, 2].

The protocols intended for distributed wireless sensor networks must provide efficient energy consumption while considering node mobility, environmental noise, limited batteries, and message loss, among others. This is a matter of discussion in the following sections.

Figure 3 shows the taxonomy of our proposed classification. It can be seen that all WSN organization techniques can be classified into one of the discussed groups: centralized or distributed. The following sections present a further classification for each group and their associated main works.

3. Centralized Wireless Sensor Networks

Centralized networks take directions from a unique device. This central node is responsible for providing network operation services such as node localization, event detection, and traffic routing. A suitable logical topology for this approach is a star. Centralized networks can be classified according to how the information is processed. These groups include the following:
(i) Single Sink. The objective of the formation strategy is to reduce the forwarding time and route the information towards a unique sink. The main drawback of single-sink systems is the lack of redundancy.
(ii) Multisink. Multiple sinks are employed for scenarios in which the previous tasks are distributed among several nodes. This is done for a number of reasons such as network density, coverage area, redundancy, distribution of traffic flows, network life span, and energy consumption.
(iii) Multiple Task Devices. Recent research works suggest the use of auxiliary network devices; these devices can be responsible for a specific activity inside the network, such as knowing the complete environment to define a route, controlling node movements, or defining a target node, in order to improve the overall WSN application performance.

A further classification can also be made according to the dynamics of the node roles. The classes are Hierarchical Networks, Static Networks, and Defined Operation Networks. A brief discussion of these types of networks is given below.

3.1. Hierarchical Networks

A sensor defines priorities according to its role in the network. Traffic forwarding nodes have a lower precedence than fully functional nodes (which sense, coordinate, process, and forward information). The network control is performed in a hierarchical way and is defined based on the roles. This kind of network is usually implemented using the IEEE 802.15.4 protocol [3].

For instance, [4] presents a multisink environment architecture (ICatchYou) [5] based on the IEEE 802.15.4 protocol. It employs a multihop forwarding strategy and addresses the sensor localization problem. The authors propose a centralized technique to guarantee high mobility between sink nodes.

Self-configuration is used to find the appropriate sink for the registration process; several metrics are used for choosing the appropriate sink, deciding how the information is gathered, and so on. Each sensor node receives all messages directly from the sink node. Two scenarios are considered: the first is a closed one with obstacles and interference, and the second is without obstacles or interference. The second scenario presents the best results because nodes achieve better performance over larger distances.

In this proposal, the authors do not present final results and do not guarantee the full functionality of the sensors in the environment, although multihop offers some advantages and makes it easier to guarantee an efficient, fast handover between sink nodes. Their algorithms are inefficient because all nodes send broadcast messages, which may flood the network. The technique can be applied to mobile scenarios, but an implementation is not reported in this paper. Besides, energy consumption and scalability are not taken into account. Only the link quality is considered, which makes the decision-making for choosing a sink node inefficient.

In another proposal, [6] presents a Tree-Based Routing Protocol (TBRP) where every node has the capability of environmental sensing and computation or keeps communication with other nodes in the network. Nodes are mobile; the node movements are defined following a target. The routing algorithm is composed of different stages: the first one is the formation of the tree by broadcast messages, the second one is data collection and transmission, handled by a TDMA schedule, and the last one considers failures, energy level, or movement of the parent node.

The TBRP protocol improves the lifetime of the nodes and the network by moving nodes to the next higher level when an energy threshold has been reached. The algorithm works in a centralized way, and the energy consumption of sending messages is not considered. It is compared with LEACH [7] (even though LEACH does not use tree formation) and with the TEEN protocol [8]. This algorithm can also be classified under Routing Based Networks or tree topology based strategies. Formation techniques under this protocol are addressed in Section 5.2.

3.2. Static Networks

Usually, nodes are placed in strategic positions before the application is launched. The aim is to provide better data collection and processing performance. Formation techniques are discussed below.

In [9], an energy balancing multisink optimal solution is presented. The total number of deployed nodes is fixed, the location of every node is known, and the whole network is partitioned into disjoint clusters. The centroids of the clusters are taken as the sink positions. The node positions are chosen according to some metrics. The cluster formation is based on Particle Swarm Optimization (PSO) [10]. The sensor network is represented as a connected graph G = (V, E), where V is the set of vertices (sensor nodes) and E is the set of edges (transmission links); sinks are predefined and nonmobile.

The objective is to find the optimal location for every sink node by minimizing the average sensor distance from the sink and maximizing the one-hop connectivity of each sink placed in the network, in order to reduce the energy consumed in the network and extend the network lifetime. The authors use the k-means algorithm for the clustering process, which is applied iteratively from an initial set of cluster centers. Furthermore, an iterative parallel search algorithm based on a Particle Swarm Optimization strategy is used, in which agents are defined as particles.

The set of experiments involves different quantities of sink nodes for computing the average sink node degree and the average hop count. The disadvantages of this algorithm are as follows: it spends a large amount of energy because the position of every node in the environment must be known, and the number of clusters grows proportionally with the number of sinks.
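To make the clustering step of [9] concrete, the following minimal sketch (in Python, with illustrative names not taken from the paper) applies plain k-means to known node coordinates and returns the resulting centroids as candidate sink positions; the PSO refinement described above is omitted.

```python
import random

def kmeans_sink_placement(nodes, k, iterations=50):
    """Partition known node positions into k clusters and return the
    centroids as candidate sink positions (sketch of the step in [9])."""
    centroids = random.sample(nodes, k)          # initial cluster centers
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x, y in nodes:                       # assign each node to its nearest centroid
            i = min(range(k),
                    key=lambda c: (x - centroids[c][0])**2 + (y - centroids[c][1])**2)
            clusters[i].append((x, y))
        for i, members in enumerate(clusters):   # move each centroid to its cluster mean
            if members:
                centroids[i] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids                             # candidate sink locations

# Example: 100 random nodes, 3 sinks
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(100)]
print(kmeans_sink_placement(nodes, k=3))
```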

In [11], a routing strategy for a hybrid sensor network is presented; three versions of routing strategies based on the Bellman-Ford algorithm are given: centralized, semidistributed, and distributed. A strategy for optimal placement of the sinks, based on the k-means algorithm, is used. According to our classification, this proposal also belongs to the routing based network class, which is explained in detail in a later section.

3.3. Defined Operation Networks

The node behavior is defined while the network is working. The application starts once the nodes have detected an event; the nodes then forward their information to the target sink node.

In [4], the ICatchYou architecture is proposed; it belongs to this kind of network by its processing and is also classified by the protocol used during network formation (IEEE 802.15.4), as explained in Section 3.1.

An adaptive learning scheme for load balancing with zone partition in multisink WSNs (QAZP) is presented in [12]. A centralized Mobile Anchor (MA) agent, equipped with a directional antenna and a GPS device, is introduced in [13]; the MA agent is oriented at the intersections obtained by drawing a Voronoi diagram. After the location information of the MA is determined, the MA sends beacon signals to different sink nodes through the directional antenna, and the sensor nodes that receive the beacon signal can transmit the subsequently collected sensing data to the nearest sink through hotspot devices; the hotspots are devices which notify other sensors which sensor is able to transmit its information. A machine learning process is applied to the MA to make it adaptable to any traffic pattern.

This proposal consists of a large number of sensor nodes, several sink nodes, and one MA. The MA affiliation makes the network capable of being partitioned into several regions according to the number of sinks; location information is obtained through GPS devices. There are predefined movements for the MA: to the upper left-hand corner (highest residual energy) and to the lower right-hand corner; after the movement is done, the sensor node chooses another route to balance the load based on parameters such as residual energy and hop distance.

The assumptions underlying this proposal are difficult to fulfill in a real implementation: the environment is always observable, good decisions and good behavior are expected, ideal environments without traffic or message loss are assumed, and sensors have infinite energy. The MA is controllable and always predefined. Hotspots concentrate a large amount of information from the whole environment; data collection is taken from specific zones.

The concept of Dynamic Convoy Tree-Based Collaboration (DCTC) is introduced in [14]; the strategy works in a centralized way and is stated as a multiple objective optimization problem. The aim is to find a convoy tree sequence with high tree coverage and low energy consumption, that is, a minimal sequence with maximum coverage, under the assumption of ideal communication. The authors propose a conservative and a prediction-based scheme for tree expansion and pruning, and a sequential and a localized reconfiguration scheme for tree reconfiguration.

A convoy tree is a moving tree which tracks a target; this target can move along the environment, and the tree is dynamically configured to add or prune nodes as the target moves. The overall operation of the proposal is summarized as follows: first, the target enters the detection region; then, sensor nodes detect the target and collaborate with each other to select a root and construct an initial convoy tree.

This proposal presents two algorithm versions: DCTC and O-DCTC (Optimal Dynamic Convoy Tree-Based Collaboration with dynamic programming). The first is used when the convoy tree is reconfigured as the target moves, and the second consists in the formulation of the optimization problem, which finds a min-cost convoy tree sequence with high tree coverage.

The Geographical Adaptive Fidelity (GAF) protocol [15] is used for energy saving. This protocol divides the network into predefined grids (Figure 4); every node communicates directly with another pair of nodes. The node responsible for monitoring whether any event occurs in the network is called the Grid Head node; the remaining nodes are known as ordinary nodes and wake up periodically.
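As an illustration of the grid idea, the sketch below (a simplification under stated assumptions, not the original GAF implementation) assigns nodes to virtual grid cells by their coordinates and elects the node with the highest residual energy in each cell as Grid Head, leaving the rest as ordinary nodes; the data layout is hypothetical.

```python
from collections import defaultdict

def gaf_grids(nodes, cell_size):
    """Group nodes into virtual grid cells and pick one Grid Head per cell
    (here: the node with the highest residual energy, an assumption);
    the rest stay ordinary. nodes: dicts with 'id', 'x', 'y', 'energy'."""
    cells = defaultdict(list)
    for n in nodes:
        cells[(int(n['x'] // cell_size), int(n['y'] // cell_size))].append(n)
    roles = {}
    for members in cells.values():
        head = max(members, key=lambda n: n['energy'])
        for n in members:
            roles[n['id']] = 'grid_head' if n is head else 'ordinary'
    return roles

nodes = [{'id': 1, 'x': 3, 'y': 4, 'energy': 0.9},
         {'id': 2, 'x': 4, 'y': 4, 'energy': 0.5},
         {'id': 3, 'x': 12, 'y': 1, 'energy': 0.7}]
print(gaf_grids(nodes, cell_size=10))
```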

This work only considers the problem of detecting a single mobile target at a time. The movements and event detection occur in a predefined area; each sensor node has a GPS for global positioning. It is assumed that the moving target trace is known a priori and that each node has knowledge of the network topology. Ideal conditions are also assumed. Dijkstra's algorithm [16] is used for routing.

Two different reconfiguration schemes are presented in [17] for addressing the Optimal Dynamic Convoy Tree-Based Collaboration (DCTC) problem [14]; a min-cost convoy tree sequence is proposed to formalize the problem. The reconfiguration schemes are Optimized Complete Reconfiguration (OCR) and Optimized Interception-Based Reconfiguration (OIR): the first one concerns the whole network, and the second one only modifies the network locally. Finally, a comparison between these two schemes is presented.

Some assumptions are made: sensor nodes are stationary, and they are aware of their own location through a GPS device [18] or a triangulation technique [19]; both techniques are considered in this work. The main problem of the reconfiguration is to find a minimum-cost sequence for the convoy tree. Its principal weaknesses are that energy saving is handled globally, every device requires a GPS, performance degrades in dense networks, and the target movements remain predefined. The combination of the two techniques finally shows a better performance.

A multiple sink WSN and a topology configuration scheme that automatically reconfigures the network in case of node failures are proposed in [20]. The number of retransmissions caused by random message losses in wireless communication is calculated. Some assumptions are as follows: the network is static, the optimal positions of the nodes are not considered, and the data is only transmitted to the sink nodes; transmission and reception of information between sensor nodes are not considered; node failure events (originated, e.g., by a fire or an enemy attack) occur at random times.

For a better performance, the path cost is calculated using the Signal-to-Noise Ratio model (SNR is a measure used in science and engineering that compares the level of a desired signal to the level of background noise). The Wireless Spanning Tree Protocol (WSTP) [21] is used for routing and is divided into different stages; reconfiguration is triggered when a node detects the failure of its parent node; the affected node then searches for a new parent and connects to it. The main drawback of the WSTP strategy is that it considers neither the total communication cost nor the loss of communication with the sink nodes during the procedure.
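The parent reselection step can be illustrated with the following hedged sketch; the link_cost function stands in for the SNR-derived cost used in [20], and the field names are hypothetical.

```python
def reselect_parent(node, neighbors, link_cost):
    """When a node's parent fails, pick the alive neighbor that minimizes the
    cost to a sink: the neighbor's own path cost plus the local link cost.
    neighbors: list of dicts with 'id', 'alive', 'path_cost_to_sink'."""
    candidates = [n for n in neighbors if n['alive']]
    if not candidates:
        return None                                   # node is disconnected
    best = min(candidates,
               key=lambda n: n['path_cost_to_sink'] + link_cost(node['id'], n['id']))
    node['parent'] = best['id']
    node['path_cost_to_sink'] = best['path_cost_to_sink'] + link_cost(node['id'], best['id'])
    return best['id']

node = {'id': 'n7', 'parent': 'dead', 'path_cost_to_sink': None}
neighbors = [{'id': 'n2', 'alive': True, 'path_cost_to_sink': 3.0},
             {'id': 'n5', 'alive': True, 'path_cost_to_sink': 2.0}]
print(reselect_parent(node, neighbors, link_cost=lambda a, b: 1.0))   # 'n5'
```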

These are some examples of recent work on centralized WSNs, whose advantages and drawbacks inspired this section. The next section presents important works on distributed techniques, some of which report an implementation and good results.

4. Distributed Techniques for Wireless Sensor Network

4.1. Characteristics and Methods

Distributed techniques are used when the application has to preserve some properties, namely, energy saving, the number of connections, memory, and efficiency, among others, or when processing the information in a centralized way is inefficient. The distributed techniques have some special characteristics:
(i) Independence. The user is the only one who chooses where the data will be stored and when the data can be modified or deleted. The saved information does not have any dependency on other devices, and the important decisions are based on the device's own data. Most of the time, this feature is supported by an own server or a host provided by a supporting company.
(ii) Integrity with respect to Other Services. Adopting this type of distributed technique does not mean giving up the integrity offered by centralized models.
(iii) Scalability. According to the application, scalability allows adding more nodes to the network without changes in the network performance; that is, additions do not affect the rest of the network.
(iv) Reduced Information Management. Networks are based on local information knowledge, namely, that of the neighbors.

As with centralized networks, these networks are characterized by working with single-sink or multisink environments. They are divided into different categories according to their application: Hierarchical Networks, Application Based Networks, and Topology Based Networks.

Some features and evaluation metrics for the most used distributed network topologies are discussed in [22]. The authors assume a large number of nodes in each network, randomly deployed in a common area. Some assumptions are made: the base station is outside the area where the nodes are deployed, sensor nodes are stationary, data is sensed at a fixed rate, and all nodes have the same communication and transmission capabilities.

Some of the addressed metrics are network lifetime, message loss, overhead, efficiency, latency, and reaction time (how long the data takes to reach the sink node). The topologies considered in this work are flat-, cluster-, tree-, and chain-based.

Metrics are divided into important critical issues:
(i) Energy use covers efficiency, energy distribution, average dissipated energy, resources expended per packet delivered (defined as the ratio between the number of broken pairs and the total packets delivered), and the number of packets before partition (measured by the number of data packets sent and successfully delivered before a network partition).
(ii) Network lifetime is determined from the instant when the failure of the first node occurs; other parameters can also be determined, namely, the percentage of node failures and the number of delivered packets in a certain time.
(iii) Scalability measures how a protocol performs when varying the node density, the overall network size, or the number of data sources and sinks.
(iv) Overhead and efficiency are determined by the routing protocol message cost, message loss, control overhead, average route length, and so forth.
(v) Temporal evaluation criteria include latency and reaction time.

Every topology can be properly evaluated according to the application, the resources available in the network, and the metrics to be applied. The performance of each topology with respect to the others is summarized below. Regarding energy consumption, the chain-based topology saves more energy than the other topologies. The flat topology is the worst: it has large latency with low message losses, it does not take energy constraints into account, and it may cause implosion and overlap, but it is better than the others with respect to overhead since it does not maintain a defined structure.

Regarding reliability, the best topology is the cluster-based one, due to its easy reconfiguration, scalability, low latency, and energy saving, but the energy dissipation rate differs greatly from one sensor to another and network connectedness may not be guaranteed.

The chain-based topology is the most promising network in this study. The leader in a chain topology acts as the sink; it saves more energy than cluster-based topologies and offers a longer lifetime, but it spends much time in data collection and its overhead is high. The tree topology saves more energy than the cluster-based topology, but its formation is costly and time-consuming, it is not resilient to node failures, power consumption is uneven across network nodes, and tree maintenance is expensive.

Observations about the behavior of the different topologies are presented, but there are no case studies of the performance of the cited topologies; reconfiguration techniques are not considered, the topology evaluation depends completely on the kind of application, and the results are not absolute.

4.2. Hierarchical Networks

Another work proposes AETOS (the Adaptive Epidemic Tree Overlay Service) [23], a new agent-based approach for building and maintaining robust tree topologies on demand. Nodes reactively rewire their connections to reflect changes in the environment. The interaction between the agents and the application is managed through another local agent called the AETOS proxy. This proposal focuses on virtual tree topologies built on an underlying network infrastructure; the authors examine how node failures (the abstract state in which the overlay communication of a node is interrupted) influence the tree overlay and how local software agents can cooperate to make the topology robust to failures.

The algorithm does not consider the optimum number of children that each node should retain for controlling the processing cost. The behavior of every agent in AETOS is competitive, meaning that each agent continuously improves its position in the tree by choosing to connect with more robust agents, which changes the proximity criteria and implies high energy consumption. Each agent in the tree, excluding the leaves, is assumed to have a number of children. This work shows local adaptive and reconfigurable agents that cooperate and use a self-organization strategy based on AETOS; the authors did not consider a multisink environment or energy constraints, nor did they provide reconfiguration for the principal node in the tree.

ECSA (Efficient Cluster-Based Self-Organization Algorithm), proposed in [24] for partitioning a WSN and giving it a hierarchical organization, is explained in Section 4.4.1.

4.3. Application Based Networks
4.3.1. Events Based Network

In this kind of work, events are the trigger to start an execution or a formation. Examples of such works are presented below. In [25], the results and experience of implementing the reinforcement learning based multicast routing protocol (FROMS) on a testbed of ScatterWeb nodes are evaluated and discussed.

FROMS is a multisource multisink routing approach that uses Q-learning to identify network routes that optimize for the shortest path or the best energy efficiency. A good exploration/exploitation ratio ensures that routing costs are kept low by often using the currently best available route, which corresponds to the local minimum.

They also use techniques based on methodologies like Ant Colony Optimization (ACO) [26]. Each node uses only the locally available information, namely, the number of hops to the sink node, and finds the globally optimal route. Even though the application is good, a single predefined node device is used for transmitting information to the whole network, and node reconfiguration is not used.
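A generic sketch of the Q-learning idea behind this kind of routing is given below; the update rule, the epsilon-greedy policy, and the parameters are illustrative textbook choices, not the exact FROMS formulation.

```python
import random

def choose_next_hop(q_values, neighbors, epsilon=0.1):
    """Epsilon-greedy next-hop selection: mostly exploit the current best
    route estimate, occasionally explore another neighbor."""
    if random.random() < epsilon:
        return random.choice(neighbors)               # exploration
    return min(neighbors, key=lambda n: q_values[n])  # exploitation: lowest estimated cost

def update_q(q_values, hop, reported_cost, hop_cost=1.0, alpha=0.5):
    """Update the estimated cost of routing via `hop` from the cost the
    neighbor reports back (e.g., piggybacked on data packets)."""
    sample = hop_cost + reported_cost
    q_values[hop] += alpha * (sample - q_values[hop])

q = {'n1': 4.0, 'n2': 2.5}
hop = choose_next_hop(q, ['n1', 'n2'])
update_q(q, hop, reported_cost=2.0)
print(hop, q)
```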

On the other hand, in [27], the aim is to relate self-organization in a multiagent system to thermodynamic concepts that provide tools for analyzing, designing, and operating agent systems. An example of such a system is the self-organization of an insect colony through pheromone-based coordination. Pheromones are scent markers that insects use in two ways: first, they place pheromones in the environment to record their state, and, then, they adjust their movements to the gradient of the pheromone field.

The environment in which pheromones are left plays a critical role in such a system. Pheromones of the same flavor placed by different ants aggregate, providing a form of data fusion across multiple agents at different times; pheromones also evaporate, thus forgetting obsolete information; and a third function is provided: disseminating information from nearby locations.

There are two agents: one stationary and one mobile; neither knows the location of the other, and both desire to be together; the mobile agent can travel once it knows where to go and how to get to the stationary agent.

In order to be effective, multiagent systems must yield coordinated behavior from individually autonomous actions. This approach is inappropriate for complex applications, and control time is lost. The proposal has not been implemented on real nodes, and the authors did not present results. Other disadvantages of this approach are the large amount of processing and the energy consumption of nodes, which must know the whole environment.

4.3.2. Routing Based Network

These works focus on finding the best path to reach the sink node. The route is chosen based on different metrics, such as residual node energy, the number of hops, distance, the number of visited nodes, and so on.

A bioinspired self-organized algorithm that also meets enhanced sensor network requirements, including energy consumption, success rate, and time, is proposed in [28]. This paper uses an Ant Colony Optimization (ACO) algorithm, a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs, for optimal route discovery in a multihop WSN. ACO works with predefined ants, and a new kind of ant is introduced (random ants) whose main task is to disseminate information gathered at the nodes among neighboring nodes. The proposal works in a distributed way to collect data and/or detect an event.

Routing is achieved through probabilistic decision rules and a self-organized strategy. Results are based on the NS2 simulator [29]; nodes have bidirectional communication; the weight of a link for transmitting information is proportional to the power consumption of a node; thus, a large amount of energy can be depleted. The convergence time is not considered.

When a packet passes through a node at a certain speed, the node first gathers all the ant agents into a buffer and then selects the optimal path from its routing table to transfer packets; cycles are avoided by adding a unique ID to the paths. This strategy may need a lot of memory, which is one of the main constraints of sensor networks. The paper does not include graphical results on the performance of the algorithm.
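The probabilistic decision rule typical of ACO routing can be sketched as follows; the pheromone and heuristic values, as well as the parameters alpha, beta, and rho, are illustrative and not taken from [28].

```python
import random

def aco_next_hop(pheromone, heuristic, neighbors, alpha=1.0, beta=2.0):
    """Probabilistic next-hop choice: the probability of neighbor j is
    proportional to pheromone[j]**alpha * heuristic[j]**beta (the heuristic
    could be, e.g., the inverse of the hop count or of the link energy cost)."""
    weights = [(pheromone[j] ** alpha) * (heuristic[j] ** beta) for j in neighbors]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for j, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return j
    return neighbors[-1]

def evaporate_and_deposit(pheromone, path, rho=0.1, deposit=1.0):
    """Evaporate pheromone everywhere, then reinforce the links of a path
    that successfully reached the sink."""
    for j in pheromone:
        pheromone[j] *= (1.0 - rho)
    for j in path:
        pheromone[j] = pheromone.get(j, 0.0) + deposit

pher = {'a': 1.0, 'b': 1.0}
heur = {'a': 0.5, 'b': 1.0}
hop = aco_next_hop(pher, heur, ['a', 'b'])
evaporate_and_deposit(pher, [hop])
print(hop, pher)
```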

Another work, presented in [30], proposes a new system architecture for a multisink environment and two new routing algorithms: ELBR (Energy Level Based Routing) and PBR (Primary Based Routing). A new definition of energy level is introduced; it is related to the number of times a node can transmit data, considering the energy consumption of every message sent. The first routing algorithm (ELBR) calculates the path energy level and chooses the path with the maximum energy level to transmit data; the PBR algorithm takes into account both the energy level and the energy cost of the routing path. Some drawbacks are as follows: only one sink node can receive information at a time, reconfiguration is not considered, and a node is preselected to send information.
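One plausible reading of the ELBR path selection, sketched below under the assumption that the path energy level is the minimum residual energy level along the path (the bottleneck node), is the following; names and data are hypothetical.

```python
def path_energy_level(path, energy_level):
    """One reading of the 'path energy level' in [30]: the path is only as
    strong as its weakest node, i.e. the minimum node energy level on it."""
    return min(energy_level[n] for n in path)

def elbr_choose(paths, energy_level):
    """ELBR-style choice (sketch): among candidate paths towards a sink,
    pick the one with the maximum path energy level."""
    return max(paths, key=lambda p: path_energy_level(p, energy_level))

energy_level = {'a': 12, 'b': 3, 'c': 9, 'sink': float('inf')}
paths = [['a', 'b', 'sink'], ['a', 'c', 'sink']]
print(elbr_choose(paths, energy_level))   # ['a', 'c', 'sink']
```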

A multisink and load balance routing algorithm (MSLBR) is presented in [31] and compared with ELBR and PBR (the previous work). The authors introduce a new concept for nodes in one-hop communication with the sink nodes, called deputies. Deputy nodes are considered a kind of sink node, and every deputy node sends a broadcast message in a round-robin way. The algorithm is based on a hardware implementation. A bottleneck can arise at the deputy nodes caused by message flooding, and their energy can be quickly depleted. Reconfiguration is not considered, and nodes can die easily. The same data is transmitted to all sink nodes, which may cause excessive energy loss and data redundancy; the energy consumption is not considered.

A path-bottleneck-oriented and energy-cost-based routing scheme is proposed in [11]; this work designs (a) a centralized version for sensor networks with global topology information, (b) a semidistributed version to further improve energy efficiency, and (c) a fully distributed version to support large-scale sensor networks. Sensor nodes dissipate energy only in data transmission; a multisink environment is considered. The architecture is responsible for sending data via the Internet to the base station and assigns activities to the sink nodes. Subsequently, the sink nodes send instructions to the nodes in the environment via multihop paths.

The base station has complete knowledge of the topology and of the remaining energy of all the sensors; sink nodes calculate the appropriate route to reach a specific node. A modified version of the Bellman-Ford algorithm [32] is used for routing, and the k-means algorithm is used for optimal deployment of the sink nodes. It is not explained exactly how the distributed algorithm works; the authors do not take reconfiguration into account, and data seems to always be forwarded along the same route.
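For reference, a textbook Bellman-Ford computation with transmission energy costs as edge weights, rooted at the sink, looks as follows; the modification used in [11] is not reproduced here, and links are assumed bidirectional.

```python
def bellman_ford(nodes, edges, sink):
    """Textbook Bellman-Ford from the sink, with edge weights interpreted as
    transmission energy costs. edges: list of (u, v, cost) tuples."""
    INF = float('inf')
    dist = {n: INF for n in nodes}
    next_hop = {n: None for n in nodes}
    dist[sink] = 0.0
    for _ in range(len(nodes) - 1):
        for u, v, cost in edges:
            for a, b in ((u, v), (v, u)):        # relax both directions of the link
                if dist[b] + cost < dist[a]:
                    dist[a] = dist[b] + cost
                    next_hop[a] = b              # forward towards the sink via b
    return dist, next_hop

nodes = ['s1', 's2', 's3', 'sink']
edges = [('s1', 's2', 1.0), ('s2', 'sink', 1.0), ('s1', 's3', 0.5), ('s3', 'sink', 2.0)]
print(bellman_ford(nodes, edges, 'sink'))
```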

An Information Selection Branch Grow (ISBG) algorithm is proposed in [33], which achieves a higher network lifetime and reduces the end-to-end network delay. A base station is defined as a sensor node that is connected to a gateway (data sink) with a wired cable; ordinary nodes cannot be selected as the base station, and it is assumed that every base station has an unlimited power supply and high computational capabilities. The proposed algorithm uses a tree topology and grows branches whose leaf nodes are closer to the base station according to the minimum number of hops; the base station is chosen according to metrics considering the lightest weight, the smallest number of child nodes, the minimum degree of freedom, and so on.

The aim is to build a balanced tree in order to achieve energy balancing. For this purpose, a balance criterion is considered: when all branches of the base station have an equal number of child nodes, the algorithm selects the potential branch to grow, and all nodes eventually broadcast their neighboring information. In this proposal, message loss is not considered. Every base station is treated as the center of a grid and is connected by cables; in a real scenario, this can only be done in favorable environments.
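A highly simplified sketch of the balanced branch-growing idea is shown below; the reachability input and the data structures are assumptions for illustration, not the actual ISBG procedure.

```python
def grow_balanced_tree(branches, new_node, reachable):
    """Attach a new node to the base-station branch that currently has the
    fewest members, restricted to branches the node can actually reach.
    branches: dict branch_id -> list of node ids;
    reachable: branch ids within radio range of new_node (an assumption)."""
    candidates = [b for b in reachable if b in branches]
    if not candidates:
        return None
    target = min(candidates, key=lambda b: len(branches[b]))
    branches[target].append(new_node)           # the chosen branch grows by one node
    return target

branches = {'b1': ['n1', 'n2'], 'b2': ['n3']}
print(grow_balanced_tree(branches, 'n4', reachable=['b1', 'b2']))  # 'b2'
```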

4.4. Topology Based Networks

In this subsection, the most common topologies used in sensor networks are considered. Other kinds of topologies, where the node behavior affects the performance of the network, are also analyzed. Furthermore, some metrics are presented as emergent properties of this behavior; such metrics are reliability, energy consumption, and latency.

4.4.1. Cluster-Based Formation

Cluster-based control structures allow a more efficient use of resources. A hierarchical view of the created network through clustering decreases the computational complexity in the formation of the underlying network. This is especially true in sensor networks that are expected to consist of a large number of individual nodes.

On a topological level, clustering is achieved by grouping nodes inside a certain transmission area. A designated node, usually known as the Cluster Head (CH) or leader node, controls this group of nodes (Figure 5). The leader node is selected according to a weight that may correspond to a node's capability to perform additional duties. It can be determined by taking into consideration aspects such as the node's residual energy, amount of memory, processing capabilities, and number of neighbors. Usually, the weights are computed locally in each node, and they may depend on the application where the structure is used.
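A minimal sketch of such a locally computed, weight-based cluster-head election is given below; the weight coefficients and the tie-breaking rule are illustrative assumptions, not taken from any specific reviewed work.

```python
def node_weight(node, w_energy=0.5, w_memory=0.2, w_cpu=0.2, w_degree=0.1):
    """Combine the capability indicators mentioned above into a single weight;
    the coefficients are arbitrary and would be tuned per application."""
    return (w_energy * node['energy'] + w_memory * node['memory'] +
            w_cpu * node['cpu'] + w_degree * node['degree'])

def elect_cluster_head(me, neighborhood):
    """Local election: a node declares itself CH if no neighbor in its
    transmission area has a strictly higher weight (ties broken by id)."""
    my_key = (node_weight(me), me['id'])
    return all((node_weight(n), n['id']) < my_key for n in neighborhood)

me  = {'id': 2, 'energy': 0.9, 'memory': 0.5, 'cpu': 0.5, 'degree': 3}
nbr = {'id': 1, 'energy': 0.4, 'memory': 0.5, 'cpu': 0.5, 'degree': 2}
print(elect_cluster_head(me, [nbr]))   # True: the local weight is the highest
```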

In [34], a cluster strategy with the IEEE 802.15.4 protocol is proposed; this work is explained in Section 4.4.3.

Lehsaini et al. [24] propose an Efficient Cluster-Based Self-Organization Algorithm (ECSA); the strategy is compared with the LEACH strategy [35]. The assumptions are as follows: every node has a unique identifier, all links are bidirectional, a two-hop density neighborhood is used, sensor nodes are almost stationary for a reasonable period of time during the clustering process, and each node has a generic weight.

The main proposal of this work is a randomized distributed algorithm where each sensor uses a weight criterion to decide whether to become a CH in its two-hop neighborhood. ECSA is performed in two phases: a setup phase and a reaffiliation phase. This work can be classified as a hierarchical organization; the cluster size lies between two thresholds (ThreshUpper and ThreshLower) for the maximum and minimum size, respectively, but the size can change with some exceptions. The selection of CHs is made periodically after every round in the setup phase. The maximum number of clusters is four according to a study made by Heinzelman et al. [7].

This algorithm generates a small number of balanced clusters and maximizes the lifetime of the network. The execution time could be reduced for a better performance; the connectivity of all nodes is not assured. If the time of every setup phase is short, the CH nodes change frequently and the energy consumption can grow. The algorithm keeps nodes stable during a certain time.

There are other works where self-organization is the principal activity performed. For instance, [36] proposes a self-organization algorithm for robust networking of wireless devices. This strategy allows reducing topology construction complexity and energy consumption. In this project, each node is modeled as an agent, and it is assumed that every node knows its one-hop neighbors; under this approach, a node can move, arrive, or leave the network.

Nodes of the cluster play different roles according to their responsibilities in the cluster. The topology may change often due to the fact that nodes can join and leave the network because they change their transmission range, or they are mobile. In this work, some gateway agents will turn off to reduce energy consumption when some leader agents detect more than one gateway connecting the same leader agents. Every node defines and implements local decision rules.

The strategy consists of dividing the network into clusters for saving energy and maintaining the whole network based on a self-organization strategy. The main features of the algorithm are local information, a distributed strategy, and emergent node behavior. The principal weaknesses of this proposal are the assumption of ideal communication and the fact that the algorithm's convergence is never discussed.

Furthermore, an interaction between a self-organization algorithm and distributed detection processing is presented in [27]. This work covers system architecture and tradeoffs in wireless network protocols, naming, and routing. One of the primary performance metrics should be the ability of the sensor network to detect events of interest. The self-organization part is based on the LCA (Linked Cluster Algorithm) proposed in [37] and designed for small networks (one hundred sensor nodes), while the distributed detection is based on a robust version of the parley algorithm designed in [38], where the total number of sensor nodes is assumed to be fixed and different roles are used.

During an iteration of the parley algorithm, each node in a cluster sends a message that is received by all the other nodes in the cluster (relayed through the CH if necessary). Probabilities are used for a soft decision when the observations are independent and identically distributed; the underlying probability density functions are known. Sensors closer to the emitter have better data reception, and their decisions have more weight in the distributed detection process.

The detection performance of the algorithm matches that of a centralized detector having access to all the observations. The algorithm has some disadvantages, such as the limitation to one hundred nodes; moreover, it is not specified how the routing and self-organization are performed. This work is actually the combination of two existing works, and no improvement is reported. Another limitation is that the nodes can communicate with their neighbors inside the cluster without considering the hierarchical topology.

Another work proposing a clustering algorithm is presented in [39]; it includes merging and dismissing processes. The merging process is used to remove overlapped areas of several clusters and looks for an optimum number of clusters while the dismissing process removes redundant head nodes. A better performance is obtained compared with two clustering algorithms called ACE (Algorithm for Cluster Establishment) [40] and SOS (Self-Organization Sensor) [41].

Nodes in the network wake up in a random sequence just after the network is activated. Nodes can change their state whenever some change (nodes leaving or being added) occurs in the network. Nodes may die and disappear from the network for reasons such as energy depletion or system failure. The algorithm works to maintain and optimize its performance during the network's working time; in this stage, it is decided whether a node executes the dismissing or the merging process. Some disadvantages are as follows: (a) it is impossible to know the correct number of clusters without an exhaustive search, (b) messages are reduced in only one step, and (c) a large amount of energy is used for the dismissing and merging processes.

In [42], a clustering strategy using the 802.15.4 protocol is presented, which provides a tree formation over the clustering result with the sink node as the root of the tree. Every sensor measures the temperature and sends it to its leader node. This work is explained in detail in Section 4.4.3.

4.4.2. Tree Formation

In [43], the main objective is to find settings for each sensor node that optimize certain task-level QoS metrics. The network has a specific task (tracking a target or creating a map of an area based on measurement data). The assumptions are as follows: all nodes have similar communication capabilities, there is a single sink node, and a routing tree is used to connect each node to the sink.

The process of configuring a WSN consists of a defined number of phases. The first configuration phase is the construction of the tree in terms of the task quality running on the network. The sensors are able to flexibly trade reconfiguration cost (time and energy) for quality to match the demands of the application. The QoS Optimization phase is responsible for finding a Pareto-optimal set of configurations for the parameters of all nodes in the network, in terms of a number of quality metrics. The sink node movements are considered for reconfiguration.

The main disadvantages are as follows: (a) the best choice of reconfiguration method and deviation value depends heavily on the configuration of the application, (b) the sink behavior, and (c) the configuration requirements. Energy consumption is not considered in the reconfiguration problem with the mobile sink (connectivity of nodes). Besides, since the connectivity of the nodes is one-hop with the sink node, a lot of traffic and message loss can occur without a synchronization schedule; there is only one sink in the whole environment.

Another example of this kind of network is presented in [14], where an optimized tree reconfiguration for mobile target tracking in sensor networks is introduced. Furthermore, a multiple sink wireless sensor network using a spanning-tree-based topology configuration is presented in [20], which has been explained in Section 3.3.

A concept of Virtual Sensor Networks (VSN) to provide protocol support for the formation, usage, adaptation, and maintenance of subsets of sensors collaborating on specific tasks was presented by [44].

A VSN is formed by a subset of sensor nodes of a WSN, with the subset dedicated to a certain task or application at a given time. The remaining nodes, which do not belong to the formation, provide the support functionality to create, maintain, and operate the VSN. As the nodes in a VSN may be distributed over the virtual network, they may not be able to communicate directly with each other.

The major functions of a VSN can be divided into two categories: VSN maintenance and membership maintenance. The membership in a VSN is dynamic, and the communication among VSN nodes frequently depends on whether or not a node is currently a member of a VSN. The VSN maintenance functions include the management of nodes entering and leaving the VSN; a broadcast message can join two former VSNs, split VSNs, and originate boundary contours. There are some disadvantages, such as the inclusion of nodes which do not execute any activity in the whole network and can be inactive all the time; also, the energy depleted by VSN formation is not taken into account.

In [33], an extension of the tree-based routing algorithm called Information Selection Branch Grow (ISBG) is presented for energy-efficient data aggregation routing in a grid environment. It balances energy across the whole network through the base station nodes; the idea is to minimize end-to-end network delay by growing branches whose leaf nodes have a minimum number of hops from the base station. This work is explained in more detail in the previous section.

4.4.3. Cluster-Tree Formation

Cluster-tree formation is one of the most used recent approaches; it combines the best of the cluster and tree formation strategies. Cluster formation groups nodes with similar characteristics or common metrics; on the one hand, a node usually known as the Cluster Head (CH) leads the group, and the communication can be defined by the application or by the role of the nodes; on the other hand, tree formation allows deleting redundant flows.

The tree formation is based on metrics such as the residual energy of a node, the number of connections, and distance between node devices or between one node and the target. The functionality of this strategy is defined once the cluster is made; usually, the tree formation is launched over the cluster formation. The communication strategy can be modified according to the application needs. Some examples of this methodology are presented below.

A Top-Down Clustering (TDC) strategy with a tree formation algorithm is presented in [45], which does not depend on neighborhood information, location awareness, time synchronization, or network topology. The clusters have some properties, such as a constraint on the number of members; a member can belong to only one cluster; and the clusters have a threshold defining the maximum number of hops of a node from the Cluster Head.

Connectivity is measured considering the number of dead nodes. The proposed work uses breadth-first search and depth-first search for balancing the created clusters over the whole tree. The authors propose GTC, a Generic Top-Down Clustering algorithm, for better clustering solutions. The solution generated by the algorithm depends on the implementation of some functions, and the tree depth distribution depends on the number of sensors.

Parameters such as the communication range, the number of nodes in the network, the number of CHs at each level, the maximum number of hops from the CH to a child node, and the location of the root node can change. In this paper, reconfiguration and failure recovery are not considered; the tree formation time can be reduced but without ensuring optimal tree formations; also, the tree formation is launched by the leader nodes. There are some drawbacks when breadth-first search uses one-hop neighbors and depth-first search uses two-hop neighbors: the number of clusters can be reduced, and too many clusters with only a few member nodes can cause message loss.

A design and implementation of a novel wireless sensor network technique, using real devices (Freescale MC1321X), was developed in [42]. The proposed network formation strategy consists of two stages: the first performs network formation with a clustering strategy under a policy of power consumption reduction, and the second is the tree formation strategy and the measurement and collection of sensory data over the built backbone. The sink node is the root of the tree (level 0); then, levels are designated and the successor/predecessor relationship is established. Sensing and collection of temperature are performed, and the sensor data is forwarded from the member nodes towards the sink node.
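The second stage (tree formation over the clustered backbone) can be sketched as a breadth-first level assignment starting from the sink; this is a simplified illustration under assumed data structures, not the implementation reported in [42].

```python
from collections import deque

def build_tree_levels(sink, links):
    """Breadth-first pass over the backbone links left by the clustering
    stage, assigning each node a level and a predecessor towards the sink.
    links: dict node -> list of neighboring backbone nodes."""
    level = {sink: 0}
    predecessor = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in links.get(u, []):
            if v not in level:                 # first time reached: u is its predecessor
                level[v] = level[u] + 1
                predecessor[v] = u
                queue.append(v)
    return level, predecessor

links = {'sink': ['ch1', 'ch2'], 'ch1': ['sink', 'n1'], 'ch2': ['sink'], 'n1': ['ch1']}
print(build_tree_levels('sink', links))
```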

The implementation was tested in real indoor and outdoor scenarios, taking into account obstacles such as people, devices between nodes, and message loss; six sensors are used at a time, and a good performance is obtained. The inconvenience of this approach is that the configuration can cause a bottleneck in the network.

A procedure for simulation and analysis of a formation under the IEEE 802.15.4 protocol using different network settings with single-sink and multisink scenarios is the main contribution presented in [34]. Levels in the tree are defined as the distances, in terms of hops, from the nodes to the relevant sink. Figure 6 presents a cluster-tree topology with the IEEE 802.15.4 protocol. This protocol uses 3 different kinds of roles: PAN coordinator, FFD (Full Function Device), and RFD (Reduced Function Device), explained in the next section.

For the simulations, this work uses the NS2 platform because it includes implementations of the IEEE 802.15.4 physical and MAC layers. The sensors are deployed in a square area; all nodes are static, and indoor scenarios for WSNs used in small offices are considered. The sink node is located in the center of the area. The number of generated hops depends on factors such as collisions, link quality, and sensory data. In this strategy, the number of children per node is controlled by adding some coordinator rules and by choosing an appropriate value of the maximum tree depth for a better performance.

There are some drawbacks in the reported simulations: the complete connectivity of the network is not assured, results or implementations are not shown, and reconfiguration or energy constraints are not taken into account.

There are other proposals in which more than two topologies are combined, but the implementation is difficult. For instance, three different topologies are proposed in [46]; such topologies are regular hexagon (series of an adjacent grid of regular hexagon), plane grid (regular adjacent quadrangle), and equilateral triangle (regular adjacent equilateral triangles) models. For every topology, there are different configured sensor nodes; their activities and the execution time depend on the active topology.

5. Protocols in Wireless Sensor Networks

Protocols are important components of strategies for sensor networking, especially in the communication of sensor nodes. In this section, we are going to present a brief overview of important protocols used for wireless communication.

The protocols are divided into layers and sublayers; these decide how and when to receive, transmit, route, and process the information on every device available to perform any task. The data link layer receives and processes the information and then sends it to another sensor device. This layer provides the management functionalities for data transmission and possible error corrections. It is divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).

The LLC sublayer acts as an interface between the MAC sublayer and the networking activities; it provides flow and error control and is responsible for data transmission between devices on a network. Some protocols used in this sublayer are 802.3/Ethernet, 802.5, 802.11, and FDDI (Fiber Distributed Data Interface) [47].

The MAC sublayer is in charge of access control to the medium and is responsible for packet transmission, data frame validation, error checking on transmissions, transmission rate, flow control, message acknowledgments, and so forth. This sublayer directly influences how the node accesses the medium to obtain the available information about the routes. These protocols are grouped into two basic classes, which are as follows:
(i) Slot Based or Slotted Protocols. They divide time into intervals (frames or slots); the states of a node are transmit, receive, or turned off. Synchronization times are used to manage these states. The synchronization and maintenance costs penalize the energy consumption and the bandwidth. Some of them are TDMA, IEEE 802.15.4, S-MAC, and T-MAC, among others.
(ii) Sampling Based Protocols. In contrast to slotted protocols, these protocols are turned off most of the time and only turned on during specific periods, searching for activity in the channel; if some action is detected, they start receiving data; otherwise, they turn off again for energy saving purposes. Detection may be based on the channel energy level or on carrier detection. These types of protocols are flexible, and in most cases they allow communication with any sensor inside their scope; sometimes communication is not possible due to the lack of synchronization. Examples include Aloha, B-MAC, and WiseMAC, as used, for instance, on the Chipcon CC2500 transceiver and the Berkeley platform.

Slotted protocols are the most commonly used in WSNs. In this survey, only slotted protocols are considered; some examples are summarized below.

5.1. Time Division Multiple Access (TDMA)

The TDMA principle is rather simple. Traditionally, voice channels have been created by dividing the radio spectrum into radio frequency (RF) carriers (channels) and using a duplex channel; this technique is known as FDMA (Frequency Division Multiple Access). TDMA divides each RF carrier into a repeated succession of small time slots (channels). A frame is a succession of time intervals; transmission is organized in frames of duration Ti, and the interval length is given by the frame duration divided by the number of slots per frame. Information is transmitted as a burst of bits. Each conversation employs just one time slot; thus, instead of transporting only one conversation, each RF carrier transports several conversations [48, 49].
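As a small illustration, the following sketch divides a TDMA frame of duration Ti into equal slots, one per node; equal slot sizes and one slot per node are simplifying assumptions.

```python
def tdma_schedule(node_ids, frame_duration):
    """Divide one frame of duration Ti into equal slots, one per node; a node
    transmits only inside its own slot and may sleep in the others."""
    slot = frame_duration / len(node_ids)
    return {n: (i * slot, (i + 1) * slot) for i, n in enumerate(node_ids)}

# Example: a 100 ms frame shared by 4 nodes -> 25 ms slot each
print(tdma_schedule(['n1', 'n2', 'n3', 'n4'], frame_duration=0.100))
```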

5.2. ZigBee/802.15.4

IEEE 802.15.4 has been a standard since 2003; it was created as a response to the need for a sensor network protocol with low energy consumption, usually in WPANs (Wireless Personal Area Networks); it is flexible and has a small bandwidth [3, 50, 51]. This protocol supports just two types of topologies:
(i) Star Topology. It is used for the implementation of low-power networks.
(ii) Peer-to-Peer Topology. It is used for the implementation of wide and precise networks.

This protocol works with three types of roles; each of them has specific functionalities according to the network topology. These are as follows:
(i) RFD (Reduced Function Device) is limited to the star topology; these nodes communicate only with the network coordinator and cannot play the role of network coordinator. They are simple devices with limited resources and communication requirements, and they can only communicate with FFDs.
(ii) FFD (Full Function Device) is able to perform any task and can communicate with all nodes in the network. An FFD can be chosen to be a coordinator.
(iii) The PAN (Personal Area Network) coordinator acts as a router and manages the network load; in tree formation, this node acts as the sink and the root of the tree. It is an FFD device.
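A tiny sketch of the association constraint implied by these roles (an RFD can only talk to full-function devices) is shown below; it is an illustration of the rule described above, not part of the standard's specification.

```python
ROLES = {'PAN', 'FFD', 'RFD'}   # the PAN coordinator is itself a full-function device

def link_allowed(role_a, role_b):
    """Check the association rule described above: an RFD may only talk to a
    full-function device (FFD or PAN coordinator); FFDs may talk to anyone."""
    assert role_a in ROLES and role_b in ROLES
    if role_a == 'RFD' and role_b == 'RFD':
        return False
    return True

print(link_allowed('RFD', 'FFD'))   # True
print(link_allowed('RFD', 'RFD'))   # False
```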

Figure 7 shows the relationship between ZigBee and the IEEE 802.15.4 protocol. ZigBee uses IEEE 802.15.4 for the lower physical and MAC layers, while its upper layers define the way a node communicates.

The most important characteristics of the MAC layer are as follows: it provides association/disassociation, acknowledged data frame delivery (ACK), the channel access mechanism, data frame validation, guaranteed time slot (GTS) management, beacon management, and channel scanning. The MAC management services are accessed via the MAC sublayer management entity service access point (MLME-SAP). The general format of MAC frames is designed to be very flexible and adjustable to the needs of different applications; IEEE 802.15.4 is able to work with diverse network topologies.

In general, the IEEE 802.15.4 standard provides eight security levels defined to protect the frame generated at the MAC layer in different manners. They include unsecured, only encrypted, only authenticated, and encryption with authentication configurations. When the unsecured level is enabled, neither data confidentiality nor message integrity is provided. In other cases, the data encryption and the authentication of messages are provided by means of AES and AES-CBC techniques, respectively. It is possible to offer a specific service to each kind of packet. However, the selection of the security level and the definition of other parameters required for performing security procedures have to be handled by an upper layer and then communicated to the MAC entity through dedicated primitives [52].
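For reference, the eight security levels can be summarized in a small lookup table; the level names below follow the IEEE 802.15.4-2006 convention, but this is an editorial sketch and should be verified against the standard itself.

```python
# Sketch of the eight IEEE 802.15.4 security levels (names follow the
# 802.15.4-2006 convention; verify against the standard before relying on them).
SECURITY_LEVELS = {
    0x00: ("None",        False, 0),    # no confidentiality, no integrity
    0x01: ("MIC-32",      False, 4),    # authentication only, 32-bit MIC
    0x02: ("MIC-64",      False, 8),
    0x03: ("MIC-128",     False, 16),
    0x04: ("ENC",         True,  0),    # encryption only
    0x05: ("ENC-MIC-32",  True,  4),    # encryption + authentication
    0x06: ("ENC-MIC-64",  True,  8),
    0x07: ("ENC-MIC-128", True,  16),
}

def describe(level):
    name, encrypted, mic_bytes = SECURITY_LEVELS[level]
    return f"{name}: encryption={'yes' if encrypted else 'no'}, MIC length={mic_bytes} bytes"

if __name__ == "__main__":
    for lvl in SECURITY_LEVELS:
        print(f"0x{lvl:02X} -> {describe(lvl)}")
```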

For WPAN discovery and device detection, IEEE 802.15.4 uses beacon frames issued by the network coordinator. Devices can work in two modes, explained as follows:
(i) Beacon-Enabled. The network coordinator sends beacon frames periodically; the network is detected by listening on the communication channel (passive scan).
(ii) Non-Beacon-Enabled. Beacon frames are sent only when requested by a beacon request command frame (active scan).
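To make the two scan modes concrete, the following sketch contrasts them; the radio object and its methods (listen, receive, send) are hypothetical placeholders and not the actual IEEE 802.15.4 MLME primitives.

```python
# Illustrative sketch of passive vs. active scanning (the radio object and its
# methods are hypothetical placeholders, not real 802.15.4 MLME primitives).
import time

def passive_scan(radio, channel, duration_s):
    """Beacon-enabled PAN: just listen; the coordinator beacons periodically."""
    radio.listen(channel)
    found = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        frame = radio.receive(timeout=0.1)
        if frame is not None and frame.type == "BEACON":
            found.append(frame.pan_id)    # record every PAN whose beacon we overhear
    return found

def active_scan(radio, channel, duration_s):
    """Non-beacon-enabled PAN: beacons are sent only in reply to a beacon request."""
    radio.send(channel, frame_type="BEACON_REQUEST")   # solicit beacons explicitly
    return passive_scan(radio, channel, duration_s)    # then listen for the replies
```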

The ZigBee standard targets a market segment not addressed by existing standards: low data rates and a low duty-cycle connectivity service. The main reason for promoting a new protocol as a standard is to allow interoperability between devices manufactured by different companies. For instance, [53] evaluates the impact of mobile nodes with the 802.15.4 standard; FFD devices are used as sink and intermediate nodes, while RFD devices are leaf nodes or final devices. The network coordinator is selected according to the highest average energy level obtained from the energy detection (ED) procedure. Different kinds of sensors are deployed in the environment; there are nodes which measure temperature, moisture, and luminosity.

The percentage of connected nodes is studied under sink node mobility, and reconfiguration is considered. The scenarios and the network performance are evaluated through simulations. Essential characteristics such as noise, significant message loss, and delay are not considered; hence, it is not possible to know whether the implementation would keep working for a reasonable time, and the connectivity of the network is not assured.

In [54], a mathematical formulation is proposed to optimize the average number of children per parent and the number of levels in a tree (the tree height) by maximizing the network association probability. The topology formation is based on the IEEE 802.15.4 protocol. The authors executed the algorithm several times, creating an independent topology in each run, with fixed nodes and sinks in the network (multisink environment). A Complete Spatial Randomness (CSR) distribution of nodes in space is assumed in order to derive the average number of children per node and to maximize the number of levels. Energy consumption and reconfiguration are not considered.

In [34], a topology formation using IEEE 802.15.4, known as cluster tree characterization in a multisink environment, is presented; it was discussed above in Section 4.4.3.

5.3. Sensor Medium Access Control (SMAC)

S-MAC is slot based and defines stages (listen and sleep) for every sensor, during which the sensor can save energy. A node behaves as follows:
(i) A sensor node enters a sleep period, in which it turns off and sets its timer to be awakened after a certain amount of time.
(ii) Once the timer expires, the node wakes up.
(iii) The time a node remains awake depends on the application and the users.
(iv) Neighboring nodes are synchronized together.
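A minimal sketch of this listen/sleep alternation is given below; the timing values and function names are assumptions chosen only to illustrate the duty cycle, not values prescribed by S-MAC.

```python
# Minimal sketch of an S-MAC-style listen/sleep duty cycle (timings are
# illustrative assumptions, not values prescribed by the protocol).

def duty_cycle(listen_s=0.1, sleep_s=0.9, cycles=5):
    """Alternate listen and sleep periods; report the resulting duty cycle."""
    awake_time = 0.0
    total_time = 0.0
    for _ in range(cycles):
        # Listen period: the radio is on and may receive or transmit.
        awake_time += listen_s
        total_time += listen_s
        # Sleep period: the radio is off; a timer wakes the node afterwards.
        total_time += sleep_s
    return awake_time / total_time

if __name__ == "__main__":
    print(f"duty cycle ~ {duty_cycle():.0%} of the time awake")  # ~10% with the defaults
```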

S-MAC (Sensor MAC) is the first MAC layer protocol developed specifically for sensor networks, and it was designed with the expected energy limitations of sensor nodes in mind. S-MAC is based on the RTS/CTS (Request to Send/Clear to Send) scheme to avoid the hidden terminal problem. The transmission/reception mode is switched alternately with the help of a timer (Figure 8).

The aim of the S-MAC protocol is to preserve energy by turning nodes on during their work cycle, so they can perform their defined activities, and turning them off when this cycle ends. The sensing radios of the nodes always remain in listening mode for possible emerging events or requests [55].

In order to transmit information, a node has to compete for the medium and, if necessary, it announces changes to its tasks via broadcast. Once a transmission starts, it cannot be interrupted before it finishes. In fact, each node keeps a schedule to synchronize its transmissions and to avoid excessive loss of messages. S-MAC has a set of established rules for a node that tries to join or leave the network; these rules allow better control and support of the nodes in the network. S-MAC can be integrated with paradigms such as directed diffusion [56].

Figure 9 presents an example of reception/transmission synchronization times with 3 sender nodes and 1 receiver node, as presented in [57]. Every sender has a different transmission time; the receiver defines its listen and sleep times according to a synchronization schedule to avoid loss of messages.

An evaluation of the S-MAC protocol is presented in [57]; the protocol is able to trade off energy against latency according to traffic conditions.

Some related papers and protocols define three principal states for a node, as follows:
(i) Active. The sensor node remains in this state while it is working, even when no request has been made; this state has the benefit of being available to respond to any request at any time, but it leads to unnecessary energy waste.
(ii) Proactive. The sensor node works only when necessary or during a determined period of time, allowing energy saving. This kind of sensor consumes energy only when needed, suspending all actions when it is not active; this behavior allows lifetime extension.
(iii) Inactive or Monitor. The sensor node monitors the environment and the communication channel while waiting for instructions. Information is forwarded only when it is requested [58].
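These three states can be expressed as a small state machine, sketched below; the transition triggers are illustrative assumptions rather than terms taken from the cited works.

```python
# Illustrative state machine for the three node states described above
# (transition triggers are assumptions for the sake of the example).
ACTIVE, PROACTIVE, INACTIVE = "active", "proactive", "inactive"

TRANSITIONS = {
    (INACTIVE, "request_received"): ACTIVE,       # monitor -> respond to a request
    (ACTIVE, "task_finished"): INACTIVE,          # go back to monitoring the channel
    (INACTIVE, "scheduled_period"): PROACTIVE,    # work only during a defined period
    (PROACTIVE, "period_ended"): INACTIVE,        # suspend all actions, save energy
}

def next_state(state, event):
    """Return the new state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = INACTIVE
    for event in ["scheduled_period", "period_ended", "request_received", "task_finished"]:
        state = next_state(state, event)
        print(f"after {event}: {state}")
```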

Sampling-based protocols are not applied in the network formation works reviewed here; for this reason, they are not considered further in this survey.

6. Advantages and Limitations of Wireless Sensor Networks

Wireless sensor networks are very useful in applications where direct interaction with humans is difficult, such as surveillance, monitoring, environmental protection, routing, event detection, and multiagent systems, among others where sensors play an important role.

There are other properties that emerge from the behavior of every network, such as robustness, reliability, communication between devices, transmission time, and operational safety. Sensor nodes have inherent constraints such as limited energy and memory capabilities. They are expected to operate autonomously for a long time with minimum failures in all environments, whether in a centralized or distributed way.

We have presented some of the outstanding works in wireless sensor networks along with the most important advantages and disadvantages of each contribution. This section provides an overview of the advantages and drawbacks of the main centralized and distributed techniques.

6.1. Advantages and Drawbacks in a Centralized Technique

The main advantages of centralized techniques are as follows: they are handled by a single central device which knows the whole environment and the positions of all the deployed devices. Normally, the source of an event is known and the information is sent to a specific target or sink; there are no transmission or reception conflicts because the central node coordinates every node.

Routing is easy to calculate and the best path can be chosen considering the entire environment; the optimal node and sink positions can be calculated taking into account metrics such as the distance between nodes, the number of nodes in the environment, the amount of energy of every node, and the number of hops.
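As a concrete illustration of such a centralized path computation, the sketch below runs a standard Dijkstra search over a link graph whose edge costs combine link distance with the receiver's remaining energy; the cost function, data layout, and names are illustrative assumptions, not taken from any of the reviewed works.

```python
# Sketch of centralized route computation: a standard Dijkstra search over a
# graph whose edge costs combine link distance and the receiver's remaining
# energy (this particular cost function is an illustrative assumption).
import heapq

def best_path(links, energy, source, sink):
    """links: {node: {neighbor: distance}}, energy: {node: joules remaining}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == sink:
            break
        if cost > dist.get(node, float("inf")):
            continue
        for nbr, d in links.get(node, {}).items():
            # Penalize hops through low-energy nodes so they are drained last.
            step = d + 1.0 / max(energy[nbr], 1e-6)
            new_cost = cost + step
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    # Reconstruct the path from sink back to source.
    path, node = [sink], sink
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

if __name__ == "__main__":
    links = {"s": {"a": 1.0, "b": 2.0}, "a": {"sink": 2.0}, "b": {"sink": 1.0}}
    energy = {"s": 5.0, "a": 0.5, "b": 4.0, "sink": 5.0}
    print(best_path(links, energy, "s", "sink"))  # prefers the route through b
```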

Among the drawbacks of these techniques is excessive energy consumption: every time a node has to transmit something, it must know to which node the message is addressed. Usually, GPS or triangulation techniques are used on every node to localize it, which implies a lot of processing and energy depletion. Memory constraints are not taken into account and, usually, these techniques assume an ideal behavior of the nodes in the network, so message loss is not accounted for; besides, obstacles and interference are not considered.

In these techniques, reconfiguration is easy to implement but requires more network resources and has a high energy cost. The network does not support a high density of nodes because of the large amount of information generated in it. Network connectivity is not always assured because, in these kinds of applications, it is usually decided to connect specific nodes rather than the complete network; robustness and reliability depend on the application.

The central device is responsible for repairing a failure; failure recovery tends to be difficult for some nodes since full recovery is required and the central device needs complete information about the environment. When the central device fails, the network breaks down.

6.2. Advantages and Limitations of Distributed Techniques

A distributed technique is often used when the application has to handle a lot of information and it is convenient to have some redundancy and reliability of the information. In these techniques, the advantages and disadvantages are defined according to the devices, resources, and the environment.

The main advantages are as follows: information is local, meaning that a node only keeps information about its neighborhood (one- or two-hop neighbors). Distributed algorithms are considered scalable. Reconfiguration is made locally on the affected part; since nodes are autonomous, decisions are made by every node according to its position or its activities. The priorities and the information available in the network are defined by every role. When a node dies, the network remains in operation and the performance is not affected considerably. The distributed approach allows dealing with noisy environments including obstacles. Energy consumption is reduced at every node; usually, routing starts when an event is detected or there is a target to follow, which implies that there is no unnecessary depletion of energy before the procedure starts.

Some limitations of the distributed techniques are as follows: the connectivity of the entire network cannot be assured because nodes only have local information; when transmission is multihop and there is only one sink node, bottlenecks can arise; node mobility requires more energy; finally, when there is a single sink node, the network stops if that sink fails.

Several metrics can affect the performance of both centralized and distributed networks, such as the number of hops needed to reach a target or specific device, the number of retransmissions, the flow rate, the link quality, and the number of devices.

7. Conclusions

In this paper, relevant works on wireless sensor networks (WSNs) have been reviewed. It presents the evolution, design, and implementation of important WSN techniques of recent years, together with the most widely used protocols and standards for improving sensor applications.

The main characteristics of every technique have been discussed. The analysis focused on whether a single or multiple sinks are employed, nodes are static or mobile, the formation is event detection based or not, and a network backbone is formed or not. We have pointed out advantages and drawbacks for every paper presented, with the aim of highlighting the weaknesses and strengths of the metrics used. Centralized and distributed techniques take into account conditions for their application, namely, collisions over the wireless medium, traffic, failures in medium access, loss of messages, the size of the network, and so on.

We have remarked that distributed solutions are preferred over centralized ones. Distributed techniques support scalability, autonomous nodes, and the deployment and elimination of nodes; it is also possible to use self-organization strategies inspired by nature, in which information is shared only with neighboring nodes, whereas in centralized techniques there are no transmission or reception conflicts because the central node coordinates every node.

Some questions arise from this survey which might draw our attention when developing an application: how can we be sure we have enough devices in our application? Which kind of technique is better to apply, centralized or distributed? Since there is no framework that describes a precise way to improve an application, how can we know that the technique we are using is the correct one? Does the combination of techniques assure a better performance? Finally, Table 1 summarizes the advantages of the presented techniques, the issues still to be improved, and the main classification used in this survey.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by CONACYT, Grant no. 243189 (M. Carlos-Mancilla).