Abstract

In recent years, there have been many developments in wireless sensor network (WSN) technologies based on coding theory. Fast and efficient schemes for protecting data transfer over the WSN are among the open issues in this area. This paper reviews the application of joint rateless-network coding (RNC) within the WSN in the context of packet protection. RNC is a method in which any node in the network is allowed to encode and decode the transmitted data in order to construct a robust network, improve network throughput, and decrease delays. To the best of our knowledge, there has been no comprehensive discussion of RNC. This paper first briefly describes the concept of packet protection using network coding and rateless codes. We then discuss the applications of RNC for improving the capability of packet protection and review several works related to this issue. Finally, the paper concludes that RNC-based packet protection schemes are able to improve the packet reception rate and suggests future studies to enhance the capability of RNC protection.

1. Introduction

Since the development of wireless sensor network (WSN) technology, many research communities have concentrated on developing it for various purposes, such as equipment monitoring, health care, and military applications [1]. The use of WSN technology is not limited to simple applications involving small amounts of data, such as temperature and water level readings; it has also been used for complex functions involving larger amounts of data, such as sound, image, and video [2]. Moreover, many applications, especially multimedia applications, demand real-time response. Hence, one of the research targets is to find ways for larger amounts of data to be transferred through the WSN reliably while approaching real-time response. Figure 1 shows the distribution of sensor nodes in a sensor field, which is an example topology of a system for data transfer.

The transmission of a large amount of data is not a trivial task since it involves complex processes and computations. Furthermore, an extremely high packet reception rate is required in order to achieve real-time response. In wireless networks, the transmission of data is affected by channel impairments that cause transmitted packets to be lost or corrupted. With the bandwidth and processing rate constraints, it is almost impossible to achieve real-time response in the WSN. In order to cope with the demands of the applications, it is important to ensure that the transmission delay in the WSN is always as short as possible. Therefore, there is a need for a fast and efficient protection scheme with the capability to protect the transmitted packets from being lost or corrupted.

Forward error correction (FEC), which is one of the error control and protection techniques in channel coding, has been commonly used in wireless networks, including the WSN. However, for a multihop WSN, FEC may not be the best technique for protection. The end-to-end error control process of FEC requires a certain number of packets to reach the receiver in order to recover the lost or corrupted packets. Unfortunately, the number of packets arriving at the receiver decreases as the number of hops increases. As a result, the efficiency of FEC drops and, subsequently, the packet reception rate is reduced.

This weakness has been coped with through the use of network coding (NC) as a packet protection scheme. This coding technique was first introduced in Ahlswede et al. [3] for the network information flow problem. Based on the concept, NC allows packet processing to be conducted by any node within the network. By exploiting this characteristic, NC has the ability to protect the transmitted packets on a hop-by-hop basis and consequently provide higher protection efficiency compared to FEC in multihop WSN. The packet processing is performed via a linear exclusive-or (XOR) combination of the received packets. With a certain coding strategy, the linear combination process of the packets is expected to provide the optimum amount of redundancy to allow the receiver to successfully recover the lost packets.

Codes are usually characterized by a rate and a distance, which help ensure correct data transfer over a particular channel. However, in some cases, the channel characteristics are not known, and yet one would like to improve data transfer without sending excessive data while maintaining efficient encoding and decoding. Hence, rateless codes have been introduced; the rateless characteristic refers to the ability of the coding technique to produce an unlimited number of encoded packets for the transmission process. The encoded packets are generated through an XOR operation on the original uncoded packets, which are chosen uniformly at random according to a certain degree distribution. The similarity between the rateless coding process and NC in terms of the XOR operation has motivated the development of the joint rateless-network coding (RNC). Through the combination of both coding techniques, a more efficient protection scheme has been developed. Furthermore, the simple linear XOR operation allows the protection scheme to work fast and consume less power, which significantly benefits the WSN.

The rest of this paper discusses the RNC-based packet protection schemes in the WSN and highlights the limitations of the existing packet protection schemes. The theory of NC and its benefits, including some examples, are elaborated on in Section 2. The description of channel coding is presented in Section 3. Section 4 briefly reviews the idea of the erasure channel, while Section 5 describes erasure codes and several of their types, such as the fountain rateless codes, the LT-code, the raptor code, the online code, the shifted code, and the switched code. The review of the established joint LT-code and NC is provided in Section 6. RNC-based packet protection is discussed in Section 7. Finally, Section 8 summarizes the paper and suggests directions for future work regarding the idea of RNC.

2. Network Coding

Network coding is a new paradigm for data transmission over the networks [4]. The applications of NC in wireless networks were described in Fragouli and Soljanin [5]. The main advantages of NC are the reduction in energy consumption and throughput enhancement as described in Suyu et al. [6] and Weiwei et al. [7], respectively. Based on the concept of NC, an intermediate node will combine two or more incoming packets and create one or several output packets to be forwarded rather than simply relay the received packets to the adjacent node [8]. In the subsequent paragraph, the concept of NC will be reviewed using the example as follows.

Based on Figure 2, the two end nodes are required to exchange their respective data, but there is no direct connection between them; all traffic must pass through the intermediate relay node. As the transmission begins, one end node transmits its data to the relay in the first transmission, while the other end node transmits its data to the relay in the second transmission. According to the conventional method, the transmission continues with the relay forwarding the first node's data to the second node in the third transmission, followed by the second node's data to the first node in the fourth transmission. As illustrated in Figure 3, the process needs four transmissions in order to realize the data exchange. However, if NC is used, the relay node conducts an XOR operation on both data packets in the encoding process. The encoded data is then relayed to both end nodes simultaneously in the third transmission. Each end node recovers the original data by re-XORing the received encoded data with the data it already holds. The process is elaborated on further in Figure 4.
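As an aside, the exchange above can be sketched in a few lines of Python; the node names and payloads below are hypothetical and serve only to illustrate the XOR combine-and-recover steps.

```python
# Minimal sketch of network-coded packet exchange through a relay node.
# Node names and payloads are hypothetical illustrations.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (the basic NC combining operation)."""
    return bytes(x ^ y for x, y in zip(a, b))

data_a = b"DATA-FROM-NODE-A"   # held by the first end node
data_b = b"DATA-FROM-NODE-B"   # held by the second end node

# Transmissions 1 and 2: both end nodes send their data to the relay.
relay_buffer = [data_a, data_b]

# Transmission 3: the relay broadcasts one coded packet instead of
# forwarding each packet separately (which would take two transmissions).
coded = xor_bytes(relay_buffer[0], relay_buffer[1])

# Each end node recovers the other node's data by re-XORing the coded
# packet with the data it already holds.
assert xor_bytes(coded, data_a) == data_b
assert xor_bytes(coded, data_b) == data_a
print("Exchange completed in 3 transmissions instead of 4.")
```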

As previously mentioned, NC has offered energy reduction and throughput improvement in the networks. The next section will discuss how NC is able to reduce energy and improve throughput.

2.1. Energy Benefit and Throughput Enhancement

The example presented in the previous section has shown the energy benefit of NC. To be specific, it can be clearly observed from the example that the total transmissions required in the data exchanging process have been reduced from four to three transmissions after NC is used. This, eventually, will save the energy that is required to conduct the fourth transmission in the normal technique. Although there will be additional energy consumed for the data-combining process, the required processing power is much lower than the transmitting and receiving power [9].

In general, the flow of data in the networks can be analogized as the flow of liquid in a series of linked pipes. The maximum flow of the liquid can be defined as the maximum volume of the liquid that can be supplied into the pipes series per unit time so that the liquid does not overflow. The unit time in the piping system can be described as the number of transmissions. Based on maximum-flow minimum-cut (max-flow min-cut) theorem [10], if the capacity of each link in the network is 1 bit, the maximum flow of the network will be 1 bit. In order to describe the throughput enhancement using NC, the example, as provided in [11], will be used with a slightly different network model.

Referring to the butterfly network depicted in Figure 5, the two source nodes each wish to transmit a 1-bit data item to both receiver nodes. The first data item can easily be transmitted to both receiver nodes following the flow shown in Figure 6(a), while the flow of the transmission of the second data item to the same receiver nodes is illustrated in Figure 6(b). From the two flows, the total number of transmissions is equal to eight. However, if both flows are combined, a bottleneck appears at the intermediate node that has two input links and only one output link. This means that only one of the two data items can be transmitted over that link at any one time, as shown in Figure 7. Therefore, the transport of both data items in a single transmission is impossible unless the capacity of the bottleneck link is increased to 2 bits.

Based on the concept of NC, both data items can be encoded using an XOR operation at the bottleneck node, and the encoded output can be transferred along the bottleneck link as in the normal procedure. At each receiver node, the encoded data is decoded by re-XORing it with the data item received directly from the corresponding source, so that each receiver obtains both data items. The flow of the data is illustrated in Figure 8.

Figure 8 shows that both data items can be transferred in a single flow. In other words, the network now provides a flow for two kinds of data simultaneously, which indicates an increase in the flow of the network from 1 bit to 2 bits. This subsequently improves the throughput of the network. Again, the example also shows the energy saving offered by NC, where the number of transmissions has been reduced from eight to seven.
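The butterfly flow can be illustrated with a similar sketch; the bit values and node roles below are hypothetical and only mirror the structure of the figure.

```python
# Minimal sketch of the butterfly-network flow with XOR at the bottleneck.
# The 1-bit source values are hypothetical.

bit_s1, bit_s2 = 1, 0          # bits produced by the two source nodes

# Each receiver obtains one bit directly from "its" side of the butterfly.
direct_to_r1 = bit_s1
direct_to_r2 = bit_s2

# The bottleneck node XORs both bits and forwards a single coded bit,
# which is relayed to both receivers.
coded_bit = bit_s1 ^ bit_s2

# Each receiver recovers the missing bit from the coded bit and the bit
# it received directly, so both receivers end up with both source bits.
assert (direct_to_r1, direct_to_r1 ^ coded_bit) == (bit_s1, bit_s2)
assert (direct_to_r2 ^ coded_bit, direct_to_r2) == (bit_s1, bit_s2)
```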

2.2. Protection Scheme Using Network Coding

The data protection scheme primarily used NC coupled with the multipath routing approach, which provides resilience to node or link failure and reliable data transmission [12]. The redundancy of transmission in multipath routing is exploited in order to realize the data recovery in underwater sensor networks [13] and in wireless body area networks [14]. On the other hand, the loss recovery has also been implemented in vehicular safety communication as described in Wang et al. [15]. The authors present how to combine multiple packets transmitted by different vehicles into a single transmission and provide the method to optimize the performance of the NC-based recovery scheme.

In Lv et al. [16], NC is used as a protection scheme against single-link failure with no extra protection path required. A protection packet is assigned to the path that is predicted to have lower traffic. This scheme might not be suitable for the transmission of large volumes of data, such as multimedia data, since the traffic of all paths is then expected to be maximized. Hence, there will be no path appropriate for carrying a protection packet unless extra protection paths are available. In addition, a mathematical formulation of NC for protection against single-link failure is developed in Muktadir et al. [17]. Based on that scheme, the optimal route design is obtained in order to achieve minimal-cost transmission while providing protection. However, the optimization only focuses on reducing cost (power consumption), while a formulation of packet loss during the transmission has not been developed.

3. Channel Coding

In the practical implementation of data transmission in wireless networks, the transmitted data usually will be affected by the channel impairment, such as noise, interference, and fading. In a nutshell, although the received data is correlated, it may not be the same as the transmitted data due to the effect of the channel condition [18]. Several channel models, such as additive white Gaussian noise (AWGN), binary symmetrical channel (BSC), and binary erasure channel (BEC), are introduced to estimate the channel condition of the networks. Based on the channel model, the research can be conducted in order to identify the suitable techniques to reduce the effect of the error on the transmitted data.

Channel coding is an error control technique in a digital communication system. Errors caused by noise, interference, and fading can be reduced via certain coding techniques [19]. Channel coding provides two main error control techniques, namely, automatic repeat request (ARQ) and FEC. Channel coding allows the data to be coded before transmission in such a way that the system is able to detect any error that occurs and then either retransmit the erroneous data, according to the ARQ technique, or correct it, using the FEC technique. For larger amounts of data, a larger number of errors will occur. In the ARQ technique, more retransmission processes are then required, which causes significant growth in power consumption. For the WSN, this situation exhausts the node's power more quickly and therefore must be strictly avoided. The FEC technique allows the receiver to detect and correct the errors and omits the data retransmission process, which would otherwise add delays to the overall transmission. Also, it can be observed that the technique indirectly shifts the processing load from the sensor nodes, which would otherwise consume energy and memory space for the retransmission process, to the receiver, which normally has practically unlimited power, processing capability, and memory.

4. Erasure Channel

The binary erasure channel was introduced by Elias in 1954 as a noisy channel for error-free coding [20]. The channel describes the situation where the transmitted data either reach the receiver correctly or are considered erased or lost during the transmission [21]. Figure 9 depicts the mapping of the input and output sets in the BEC, where an error appears explicitly as an erased bit (represented by an unknown erasure value). In the well-known BSC, by comparison, an error is represented by the flip of bit 1 to 0 and vice versa, as illustrated in Figure 10. Basically, it is hard to distinguish which part of the data is erroneous in the BSC since errors appear as ordinary bits just like the rest of the data. However, once the positions of the errors are located, the error correction can be done easily by flipping back the erroneous bits. On the other hand, the locations of the errors can be easily spotted in the BEC since the erased bits are represented by unknown bit values [22]. Nevertheless, the error correction requires suitable FEC techniques.

The explanation above can be easily understood from the example illustrated in Figure 11. In that example, an 8-bit packet is transmitted over both the BSC and the BEC, and the corresponding received packets are shown. From the figure, the fourth bit from the right-hand side has been affected by a channel error. Over the BEC, the receiver can directly detect the error on the packet, since the affected bit appears as an erasure (a disappeared bit), and the error correcting process can then be initiated. By contrast, for the packet received via the BSC, the receiver must check the packet condition, such as its parity bits, in order to detect whether there is an error and, if so, which bit is erroneous.
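For illustration, the following Python sketch contrasts the two channel models on a single packet; the packet value and the error probability are hypothetical.

```python
import random

# Minimal sketch contrasting the BSC and the BEC on a single 8-bit packet.
# The packet value and the error probability are hypothetical.

def bsc(bits, p):
    """Binary symmetric channel: each bit is flipped with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def bec(bits, p):
    """Binary erasure channel: each bit is erased (None) with probability p."""
    return [None if random.random() < p else b for b in bits]

packet = [1, 0, 1, 1, 0, 0, 1, 0]
received_bsc = bsc(packet, p=0.1)
received_bec = bec(packet, p=0.1)

# Over the BEC the error positions are immediately visible as erasures,
# whereas over the BSC flipped bits look like ordinary bits and must be
# detected through extra redundancy such as parity checks.
erasures = [i for i, b in enumerate(received_bec) if b is None]
print("BSC output:", received_bsc)
print("BEC output:", received_bec, "erasures at positions", erasures)
```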

Furthermore, the packet erasure channel (PEC) is a class of erasure channel that is a generalization of the BEC, which is applied in a packet-based communication network [23]. The concept of the PEC is almost similar to the BEC; that is, the transmitted packet is received by the receiver intact, or it is known to be lost [24]. In short, the receiver will never find the transmitted packet as a corrupted packet or packet with error. If there is a case where a corrupted packet is received, the receiver will simply drop the packet and treat it as if it has never been received.

5. Erasure Codes

As previously discussed, due to the characteristics of the erasure channel, the transmitted data tend to be erased. Therefore, the FEC techniques corresponding to this channel type should be able to recover the erased bits in the BEC or the erased packets in the PEC. Such techniques are called erasure codes. The basic concept of erasure codes is illustrated in Figure 12 [22]. From the figure, a total of k packets obtained from fragmenting the source data are encoded by the source to yield n encoded packets. During the transmission, based on the PEC characteristics, several encoded packets are lost, and eventually only a subset of the encoded packets is received by the receiver. The decoding process on the received packets should be able to recover all k original packets. The failure of the decoding process to recover all original packets is known as a decoding error.
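As a minimal illustration of this k-to-n encoding idea, the sketch below uses the simplest possible erasure code, a single XOR parity packet; the packet contents and the lost index are hypothetical, and practical erasure codes tolerate far more losses than this toy example.

```python
# Minimal sketch of the erasure-code idea in Figure 12 using the simplest
# possible code: k original packets plus one XOR parity packet (n = k + 1).
# This toy code recovers at most one lost packet; the packet contents and
# the lost index are hypothetical.

def xor_all(packets):
    """XOR a list of equal-length byte strings together."""
    result = bytes(len(packets[0]))
    for p in packets:
        result = bytes(a ^ b for a, b in zip(result, p))
    return result

originals = [b"PKT0", b"PKT1", b"PKT2", b"PKT3"]   # k = 4 original packets
encoded = originals + [xor_all(originals)]         # n = 5 encoded packets

# Packet erasure channel: exactly one encoded packet is lost in transit.
lost_index = 2
received = [p for i, p in enumerate(encoded) if i != lost_index]

# Decoding: the missing original packet is the XOR of everything that
# did arrive, because the parity packet is the XOR of all originals.
recovered = xor_all(received)
assert recovered == originals[lost_index]
print("Recovered lost packet:", recovered)
```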

5.1. Fountain Rateless Codes

Fountain rateless codes, or simply fountain codes, are a type of erasure code applied in networks operating over the erasure channel. The concept of digital fountain codes was introduced in 1998 for the reliable distribution of bulk data [25]. These codes are called rateless since they allow the source to produce a potentially infinite number of encoded packets from a finite number of original packets [26]. The receiver only needs to collect a sufficient number of encoded packets in order to recover all of the original packets successfully.

In general, the encoded packets are produced from one original packet or a combination of several original packets. Each generated encoded packet is equipped with information on which input packets were used to generate it. This information could be in the form of a header attached to the packet, or it could be obtained from time synchronization between the transmitter and the receiver, or through other application-dependent means. In order to work in practice, the fountain code should have the following characteristics.
(a) The encoder and decoder are fast.
(b) The decoder is able to regenerate all k original packets with high probability from any set of n received encoded packets, where n is close to the optimal value of k.

A fountain code is optimal if the k input symbols can be recovered from any k output symbols. A fountain code with the characteristics stated above is called a universal fountain code, of which the LT-code is an example [27]. The concept of the LT-code is described further in the next section, while the other types of fountain code (raptor, online, shifted, and switched) are described briefly in Sections 5.3 to 5.6.

5.2. LT-Code

The LT-code, which was introduced by Luby in 2002, is a class of erasure code [27]. It is the first realization of near optimal erasure-correcting codes, since the original packets can be recovered from close to the minimum possible number of encoded packets, which is the number of original packets themselves. As a class of fountain code, the LT-code is rateless: the number of encoded packets that can be generated from the original packets is potentially limitless. Once slightly more encoded packets than the number of original packets are available, an exact copy of the original packets can be regenerated.

Research on the RNC has shown that the LT-code is more commonly used than other types of rateless code. The reason is the similarity between the LT-code and NC in terms of the linear XOR operation, which makes their integration possible [28]. Further discussion of the previous works involving the joint LT-code and NC is provided in Section 6.

The encoding process of the LT-code is quite simple, and each encoded packet is produced independently of the others [29]. The flow of the encoding process [27] is described as follows.
(i) The data stream is divided into k original packets of the same length.
(ii) The degree d of the packet is chosen randomly from a degree distribution over the range between 1 and k.
(iii) A total of d distinct original packets out of the k packets are chosen uniformly at random.
(iv) The encoded packet is produced from the XOR operation of the d chosen packets.
(v) The encoded packet is equipped with certain information, such as (a) the number of original packets, k, (b) the degree of the encoded packet, and (c) the list of indices of the chosen original packets.
(vi) The same process continues until an acknowledgement is received indicating that the message has been received and decoded without error, or until a certain stopping condition is satisfied.
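A minimal Python sketch of this encoding procedure is given below; the packet contents are hypothetical, and a placeholder uniform distribution stands in for the Soliton distributions described later in this section.

```python
import random

# Minimal sketch of LT encoding following steps (i)-(vi) above.  The packet
# contents are hypothetical, and a placeholder uniform distribution stands
# in for the Soliton distributions described in the following subsections.

def lt_encode(original_packets, degree_distribution, rng=random):
    """Produce one LT encoded packet from k equal-length original packets."""
    k = len(original_packets)
    degrees = list(range(1, k + 1))
    # (ii) choose the degree d from the degree distribution over 1..k
    d = rng.choices(degrees, weights=[degree_distribution(i) for i in degrees])[0]
    # (iii) choose d distinct original packets uniformly at random
    indices = rng.sample(range(k), d)
    # (iv) XOR the chosen packets together
    payload = bytes(len(original_packets[0]))
    for i in indices:
        payload = bytes(a ^ b for a, b in zip(payload, original_packets[i]))
    # (v) attach the side information needed by the decoder
    return {"k": k, "degree": d, "indices": sorted(indices), "payload": payload}

# (vi) the source keeps producing packets until the receiver acknowledges
# successful decoding; here a few packets are generated as a demonstration.
originals = [b"AA", b"BB", b"CC", b"DD"]
uniform = lambda d: 1.0 / len(originals)
stream = [lt_encode(originals, uniform) for _ in range(6)]
```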

The general decoding process [30] is given as follows.
(i) An encoded packet of degree 1 (an encoded packet that is encoded from only one original packet) is identified; it is called a check node.
(ii) With the information possessed by the identified check node, the corresponding original packet can be determined and recovered.
(iii) The value of the recovered original packet is XOR-ed into the other encoded packets associated with that original packet, if any.
(iv) The same process is repeated until all input symbols are determined.
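The peeling procedure can be sketched as follows; it assumes encoded packets in the dictionary format produced by the encoder sketch above, which is an illustrative convention rather than a prescribed packet format.

```python
# Minimal sketch of the LT peeling (belief-propagation) decoder following
# steps (i)-(iv) above.  It assumes encoded packets in the dictionary
# format produced by the encoder sketch in the previous subsection.

def lt_decode(encoded_packets, k):
    """Try to recover the k original packets from a list of encoded packets."""
    recovered = [None] * k
    pending = [(set(p["indices"]), p["payload"]) for p in encoded_packets]

    progress = True
    while progress:
        progress = False
        # (i)-(ii) find degree-1 packets and recover their original packets
        for indices, payload in pending:
            if len(indices) == 1:
                i = next(iter(indices))
                if recovered[i] is None:
                    recovered[i] = payload
                    progress = True
        # (iii) XOR every recovered original out of the remaining packets
        reduced = []
        for indices, payload in pending:
            for i in list(indices):
                if recovered[i] is not None:
                    payload = bytes(a ^ b for a, b in zip(payload, recovered[i]))
                    indices = indices - {i}
            if indices:
                reduced.append((indices, payload))
        pending = reduced
        # (iv) repeat until no further progress can be made

    return recovered   # entries still None could not be recovered yet
```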

There is certainly a possibility that an encoded packet of degree 1 may not be available. If such a situation occurs, the decoder needs to collect more encoded packets until the next degree 1 packet is obtained and, subsequently, the decoding process can continue. In the case where the source stops transmitting encoded packets because the stopping condition is satisfied, the decoding process has failed. Figure 13 depicts an example of the decoding process of the LT-code with a small number of original packets and received encoded packets [31]. In the example, it is assumed that each packet is a single bit.

Based on Figure 13, four encoded packets are received, each carrying the single-bit value shown in the figure. From step (i) of the decoding process, an encoded packet with degree 1 is identified. In step (ii), the value of the original packet associated with it is recovered, as illustrated in Figure 13(b). Based on step (iii), the recovered value is then XOR-ed into any encoded packets associated with that original packet, and the corresponding connections are removed, as shown in Figure 13(c). Afterward, steps (i) to (iii) are repeated for any degree 1 packet found until all original packets are recovered, as depicted in Figure 13(f). If the condition shown in Figure 13(c) does not occur, that is, there is no degree 1 packet, the decoder needs to wait for further packet receptions until a new degree 1 encoded packet is obtained.

In designing an efficient LT-code, a proper design of the degree distribution is crucial since it significantly affects the performance of the code. A good degree distribution has two primary aims: firstly, to ensure that the minimum possible number of encoded packets is required to recover all original packets successfully and, secondly, to keep the average degree of the encoded packets produced as low as possible. Therefore, Luby's work in the original paper on the LT-code [27] used the Soliton distribution in order to achieve these aims. In the paper, two types of Soliton distribution were described, namely, the ideal Soliton distribution (ISD) and the robust Soliton distribution (RSD). The following equation defines the ISD:
\[
\rho(d) =
\begin{cases}
\dfrac{1}{k}, & d = 1,\\[6pt]
\dfrac{1}{d(d-1)}, & d = 2, 3, \ldots, k.
\end{cases}
\]

Based on MacKay's paper [30], the disadvantage of the ISD is that the encoded packets of degree 1 may disappear at a certain point, which causes the failure of the decoding process. In addition, there is a possibility that certain original packets are never selected during the encoding process, which causes those packets to be lost forever. Due to this problem, the RSD, which is an improvement of the ISD, is used. The definition of the RSD is given subsequently. Given that \(R\) is as follows:
\[
R = c \, \ln\!\left(\frac{k}{\delta}\right) \sqrt{k},
\]

then the definition of \(\tau(d)\) is given by
\[
\tau(d) =
\begin{cases}
\dfrac{R}{dk}, & d = 1, 2, \ldots, \dfrac{k}{R} - 1,\\[6pt]
\dfrac{R \ln(R/\delta)}{k}, & d = \dfrac{k}{R},\\[6pt]
0, & d > \dfrac{k}{R}.
\end{cases}
\]

Then, the RSD is defined as
\[
\mu(d) = \frac{\rho(d) + \tau(d)}{\beta},
\]
where
\[
\beta = \sum_{d=1}^{k} \bigl(\rho(d) + \tau(d)\bigr).
\]

Basically, the RSD is derived from the ISD with two additional parameters, c and δ. The parameter δ is a bound on the probability that the decoding fails after a given number of encoded packets has been received, while c is a constant of order 1 that can be viewed as a free parameter in practice, with values smaller than 1 providing good results [30]. In this paper, the LT-code is classified as a late decoding type of coding scheme since most of the original packets are recovered only after a sufficiently large number of encoded packets has been collected by the receiver. This decoding behaviour is primarily due to the degree distribution itself. The opposite of the late decoding characteristic is early decoding, which means that the decoding process starts almost immediately after a small number of encoded packets (around two) has been received. For integration with NC, codes with the early decoding characteristic are more effective, as the original packets can be recovered at the intermediate nodes more quickly, allowing them to produce encoded packets from the original ones. This reduces the risk of altering the RSD properties, which is discussed further in Section 6.
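For illustration, the two distributions defined above can be computed directly; the parameter values k, c, and δ in the sketch are hypothetical examples.

```python
import math

# Minimal sketch of the ideal and robust Soliton distributions defined
# above.  The parameter values k, c, and delta are hypothetical examples.

def ideal_soliton(k):
    """Return rho(0..k) of the ideal Soliton distribution (index 0 unused)."""
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def robust_soliton(k, c, delta):
    """Return mu(0..k) of the robust Soliton distribution (index 0 unused)."""
    rho = ideal_soliton(k)
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))          # the degree k/R, rounded to an integer
    for d in range(1, k + 1):
        if d < pivot:
            tau[d] = R / (d * k)
        elif d == pivot:
            tau[d] = R * math.log(R / delta) / k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]

mu = robust_soliton(k=100, c=0.1, delta=0.5)
print("sum of probabilities:", round(sum(mu), 6))   # 1.0 by construction
```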

5.3. Raptor Code

The raptor code is a class of fountain code with linear time encoding and decoding [32]. Basically, the rateless characteristic of the raptor code is similar to that of the LT-code: k original packets are encoded into a potentially limitless sequence of output encoded packets, and the receiver collects a sufficient number of encoded packets to recover all original packets. Normally, the number of encoded packets collected is only slightly larger than k and, compared to the LT-code, the raptor code provides lower overhead [30, 32]. Also, the code can be either systematic or nonsystematic [32]. There are some research studies that use the raptor code in the WSN [33, 34]. In Thomos and Frossard [36], the raptor code is used for joint source-channel coding in the WSN. Nevertheless, very few studies have been done on joining the raptor code with NC. One of them is presented in Deligiannis et al. [35], where the authors proposed the optimum degree of the raptor code in order to improve the performance of raptor network coding over their previous work [37].

The following discussion of the encoding process of the raptor code is based on Shokrollahi's paper [32]. Let C be a linear block code with length n and dimension k, and let Ω be a degree distribution. The raptor code then has parameters (k, C, Ω); it is built up from the LT-code with distribution Ω and the code C, which is called the precode of the raptor code. The k input original packets are used to generate a code word of C consisting of n intermediate packets. The intermediate packets are then used by the LT-code with distribution Ω to produce the output packets (the raptor-code output). Figure 14 depicts the block diagram of the raptor code encoder.
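The two-stage structure can be sketched as follows; note that the toy precode here, which simply appends a few XOR parity packets, is a simplifying assumption used only to illustrate the structure, whereas Shokrollahi's construction uses a stronger precode such as an irregular LDPC code.

```python
import random

# Minimal sketch of the two-stage raptor encoding structure in Figure 14.
# The precode here is a toy systematic code that appends a few XOR parity
# packets; this is a simplifying assumption for illustration, whereas
# Shokrollahi's construction uses a stronger precode such as an LDPC code.

def xor_group(packets):
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def toy_precode(original_packets, num_parity=2, rng=random):
    """Append num_parity XOR parity packets to form the intermediate packets."""
    parities = []
    for _ in range(num_parity):
        subset = rng.sample(original_packets, max(2, len(original_packets) // 2))
        parities.append(xor_group(subset))
    return original_packets + parities

def lt_stage(intermediate_packets, degree_distribution, rng=random):
    """Draw one LT encoded output packet over the intermediate packets."""
    n = len(intermediate_packets)
    weights = [degree_distribution(d) for d in range(1, n + 1)]
    d = rng.choices(range(1, n + 1), weights=weights)[0]
    chosen = rng.sample(range(n), d)
    return {"indices": sorted(chosen),
            "payload": xor_group([intermediate_packets[i] for i in chosen])}

originals = [b"P0", b"P1", b"P2", b"P3"]           # k original packets
intermediate = toy_precode(originals)              # precode output
uniform = lambda d: 1.0 / len(intermediate)        # placeholder distribution
raptor_output = [lt_stage(intermediate, uniform) for _ in range(8)]
```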

The decoding algorithm of the raptor code is done based on the decoder of the LT-code and the precode used. Although the complexity in terms of cost (arithmetic operations) can be lower than the LT-code, the implementation of the decoding algorithm for the raptor code is quite complex due to multiple decoding processes [38]. However, the main disadvantage of the raptor code is that the lower bound of the total overhead is the overhead of the precode [39]. Therefore, a suitable selection of precode will determine the performance of the raptor code such as the irregular low-density parity-check code in [32].

5.4. Online Code

The online code is a class of fountain rateless codes that works based on two layers of packet processing. The online code is characterized using three parameters, as provided in Maymounkov and Mazieres [40]. Based on their paper, the number of message blocks is represented by n, while the parameter ε is called the degree of suboptimality, defined such that all message blocks can be decoded with high probability from slightly more than n output blocks. The two-layer packet processing refers to the use of both an inner code and an outer code to encode and decode the message blocks. From the literature, the online code seems less popular in research on rateless erasure codes, especially involving the WSN. This is possibly due to its more complex decoding algorithm compared to the LT-code and the raptor code [41], which makes it less suitable for the WSN environment.

The encoding process, based on the original paper of the online code [42], is as follows. Firstly, a number of blocks called auxiliary blocks are produced through an outer encoding process and are appended to the original message blocks, forming the composite blocks. The property of the composite blocks is that any sufficiently large fraction of them is enough to recover the original message blocks. The process continues with an inner encoding process, which is able to produce an infinite number of output symbols, called check blocks, from the rateless encoding of the composite blocks. Based on Maymounkov and Mazieres [40], the decoding process of the online code is simply the inverse of the encoding procedure. This is done by decoding the received check blocks to obtain the composite blocks. Using the recovered composite blocks, the original message blocks can be regenerated. The overall design of the online code is depicted in Figure 15 [40].

5.5. Shifted Code

In the original LT-code, the source produces an unlimited number of encoded packets without any knowledge of the number of original packets that have already been recovered at the destination. Encoded packets built only from original packets that are already recovered at the destination are redundant, since they provide no new information about the unknown original packets. Reducing such redundant packets is a significant advantage for the resource constrained WSN. Therefore, using partial information on the number of original packets recovered at the destination, Agarwal et al. [43] proposed the shifted RSD (SRSD). This distribution works by shifting the RSD to recalibrate the degree of the encoded packets produced as the number of original packets recovered at the destination changes: the RSD designed for the unrecovered packets is applied and its degrees are scaled up by a factor of roughly k/(k - n), where k is the number of original packets and n is the number of original packets recovered at the destination.

For comparison, both the original RSD and the SRSD are plotted in Figure 16. The parameter settings for the plotted distributions follow the example in Agarwal et al. [43], on which this plot is based. The simulation results, which are also presented in that paper, indicate that the SRSD produces fewer redundant packets than the original RSD. A mathematical analysis of the distribution is also provided.
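As an illustration of the shifting idea, the sketch below builds an RSD over the k - n unrecovered packets and scales each degree up by roughly k/(k - n); this is a simplified reading of the construction in Agarwal et al. [43], not their exact formulation, and the parameter values are hypothetical.

```python
import math

# Illustrative sketch of the degree-shifting idea behind the SRSD: build an
# RSD for the k - n still-unrecovered packets and scale each of its degrees
# up by roughly k / (k - n).  This is a simplified reading of Agarwal et
# al. [43], not their exact formulation; parameter values are hypothetical.

def robust_soliton(k, c=0.1, delta=0.5):
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for d in range(1, k + 1):
        if d < pivot:
            tau[d] = R / (d * k)
        elif d == pivot:
            tau[d] = R * math.log(R / delta) / k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]

def shifted_rsd(k, n, c=0.1, delta=0.5):
    """Probability mass over degrees 1..k after shifting an RSD for k - n packets."""
    base = robust_soliton(k - n, c, delta)
    shifted = [0.0] * (k + 1)
    for d in range(1, k - n + 1):
        target = min(k, math.ceil(d * k / (k - n)))   # scaled-up degree
        shifted[target] += base[d]
    return shifted

dist = shifted_rsd(k=100, n=40)
print("total probability:", round(sum(dist), 6))      # 1.0 by construction
```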

5.6. Switched Code

The switched code is a rateless code introduced in Kadi and Al Agha [44] that outperforms the LT-code. The code was adapted for NC and used for a broadcasting application in an ad hoc network. The backbone of the code is its degree distribution, called the switched distribution (SD). The concept behind the SD is that the source is able to switch between two degree distributions depending on the number of encoded packets transmitted. Basically, the switched code process is similar to the LT-code encoding and decoding process except for the degree distribution used.

The main aim of the switched code is to reduce the delay of the decoding and reencoding process at each node in the network, since the code is applied in a broadcasting application. In order to achieve this aim, it is important to ensure that each node is able to recover the original packets faster, so that reencoding can be conducted to produce fresh encoded packets for transmission to the subsequent adjacent node. Reencoding that involves already encoded packets should be avoided since it increases the degree of the newly produced encoded packets [44]. Unlike the LT-code, the decoding behaviour of the switched code is classified as early decoding: each node in the network is able to recover original packets as soon as the first encoded packet is received, and more original packets can be recovered as the node collects more encoded packets. This behaviour is achieved through the distribution used. Due to the early decoding characteristic, the switched code may improve RNC efficiency.

To be specific, the two distributions used in the SD are the binary exponential distribution (BED) and the SRSD. The SD can be defined as follows:
\[
\Omega_{\mathrm{SD}}(d) =
\begin{cases}
\Omega_{\mathrm{BED}}(d), & t \le T,\\[4pt]
\Omega_{\mathrm{SRSD}}(d), & t > T,
\end{cases}
\]
where \(\Omega_{\mathrm{BED}}\) is the BED while \(\Omega_{\mathrm{SRSD}}\) is the SRSD, \(t\) is the number of encoded packets transmitted so far, and \(T\) is the switching threshold. Based on the mathematical definition of the SD, it can be concluded that if the number of transmitted packets is less than or equal to the threshold, the BED will be used to obtain the degree of the encoded packet; otherwise, the SRSD will be used instead. The exponential distribution, BED, can be defined as follows:
\[
\Omega_{\mathrm{BED}}(d) =
\begin{cases}
\dfrac{1}{2^{d}}, & d = 1, 2, \ldots, k - 1,\\[6pt]
\dfrac{1}{2^{k-1}}, & d = k.
\end{cases}
\]

The proof that the BED is a valid probability distribution is provided in the original paper [44], together with a detailed analysis of the SD. The performance results provided in the paper, in terms of the number of transmissions, packet delay, buffer size, and several other metrics, indicate that the switched code outperforms other coding schemes.
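The switching mechanism can be sketched as follows; the threshold value and the use of a simple placeholder distribution in place of the SRSD are assumptions made only for illustration.

```python
import random

# Minimal sketch of the switching idea behind the SD: use the BED for the
# first few transmitted packets (fast early decoding), then switch to a
# Soliton-type distribution.  The threshold value and the use of a plain
# placeholder distribution instead of the SRSD are assumptions made only
# for illustration.

def bed(k):
    """Binary exponential distribution over degrees 1..k (index 0 unused)."""
    return [0.0] + [1.0 / 2 ** d for d in range(1, k)] + [1.0 / 2 ** (k - 1)]

def switched_degree(k, packets_sent, threshold, soliton_dist, rng=random):
    """Draw a degree: BED up to the threshold, Soliton-type afterwards."""
    dist = bed(k) if packets_sent <= threshold else soliton_dist
    return rng.choices(range(1, k + 1), weights=dist[1:])[0]

k = 16
placeholder = [0.0] + [1.0 / k] * k          # stand-in for the (S)RSD
for t in range(1, 6):
    d = switched_degree(k, packets_sent=t, threshold=3, soliton_dist=placeholder)
    print(f"packet {t}: degree {d}")
```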

6. LT Network Code

According to the concepts of NC and the LT-code, it can be observed that there is a similarity in the packet-combining part of their encoding processes. From this similarity, the benefits of both NC and the LT-code can be exploited if both coding schemes are combined to form joint LT network coding (LTNC). To the best of our knowledge, the first realization of the LTNC is presented in Pakzad et al. [45]. Since then, many research studies related to the joint LT-code and NC have been conducted for several applications, including the WSN.

Based on the literature, most types of LTNC developed share a common condition for the successful regeneration of packets; that is, the encoded packets that arrive at the receiver need to preserve the statistical property of the LT-code based on the Soliton distribution. This is a crucial condition since it ensures that the simple decoding process using the belief propagation technique, which is used in the normal LT-code, can be applied. In some studies [46–50], the LTNCs are developed by ensuring that the NC process at the relay (or intermediate) nodes in the network produces encoded packets whose degrees follow the Soliton distribution; that is, each node knows the degree distribution used. However, it is hard to implement such a condition because not all original packets are available at the relay nodes. Furthermore, randomly choosing any packets available in a node's buffer will eliminate the uniform distribution characteristic of the original packets. Therefore, several mechanisms have been proposed in order to achieve this purpose. In Champel et al. [46], the variance of the selection of the original packets to be encoded is kept as low as possible. Hence, the relay node needs to analyse each of the received packets and remember the occurrences of the original packets associated with the received packets. It then tries to exchange original packets with high occurrence for ones with lower occurrence during the encoding of a new encoded packet. However, this method is complex and requires a large memory at each relay node in order to store the received packets, although the memory size is not necessarily as large as the total number of original packets. In Von Solms and Helberg [48], the authors proposed the hybrid-LTNC, which is a combination of the LTNC introduced in Champel et al. [46] and the random linear network coding (RLNC) in the studies by Tracey et al. [51, 52]. In the proposed hybrid-LTNC, the nodes that are one hop from the receiver conduct the LTNC on the received packets, while the RLNC, which is known to be simpler than the LTNC, is carried out by the other relay nodes. Since the number of relay nodes that conduct the LTNC is lower, the complexity of the network is reduced.

In addition, the LTNC has been developed with a technique where the NC is only conducted with a certain probability; otherwise, the nodes simply forward the received packets [49, 50, 53]. This method resolves an important issue of the LTNC, which is the vanishing of the lower degree encoded packets (degree 1 and degree 2) due to the NC process conducted at the relay nodes. It is well known that the lower degree packets are crucial in determining the success of the decoding process, especially the degree 1 packets, since the unavailability of a degree 1 packet will cause the decoding process to halt. However, this method will most probably create a nonuniform distribution of the original packets and eventually affect the coding overhead. The reason for this is simple: if the distribution of the original packets is not uniform, the original packets selected with higher probability will appear in encoded packets more frequently and increase redundant information, while those with lower probability will be less involved in the encoding process, which eventually causes them to disappear.

In Liau et al. [53], however, instead of using the original Soliton distribution, the authors have introduced a Soliton-like degree distribution. It is used by the relay node of the Y-network in order to determine the degree of the packet to be combined with the original packets of two uncorrelated sources. The distribution is developed based on Soliton-like distribution properties. On the other hand, Lv et al. [28] proposed a different approach in providing LT encoded packets. Instead of producing them at the source, the LT-packets are generated gradually as the packets move through the network by applying the NC at each relay node with certain probability. Hence, the encoded packets received by the decoder do not conform to the property of the Soliton distribution, although they can be decoded.

7. Discussion

In this section, the optimal solution in realizing the protection scheme for data transfer in WSNs will be discussed. The discussions are limited to the transmission process in the PEC where the transmission packet is assumed to be intact or otherwise will be considered as lost. The protection scheme includes packet encoding, recovery, and decoding processes. It is common to have an end-to-end protection as offered by the FEC (channel coding) where the recovery process is conducted at the destination, which normally has no constraint and limitation. However, the performance of the protection scheme can be improved by having the recovery process hop-by-hop where all intermediate nodes are able to do the recovery process when packet loss is detected. This scheme benefits the network significantly especially for the WSNs that involve multihop network topology.

Network coding can be conducted at a certain node that has at least two incoming and one outgoing links. The packet-combining process can be done on the packets received from the incoming links and the combined packet will be transmitted to the subsequent node through the outgoing link. On the other hand, the node that has a single incoming and outgoing link is still able to apply the NC. This can be done by combining the received packet with the packet buffered by the node, which is obtained from the previous transmission. In general, the protection scheme using NC can be realized in a multipath network. Although the network is a single-source single-destination, the transmission of packets through multiple paths allows the protection scheme to protect the transmitted packets against packet loss or path/node failure by using the other packets from other paths to regenerate the lost packets, if any.

In order to ensure that the NC packet can be successfully decoded at the destination, the number of original packets combined in one encoded packet (the combination degree) must be kept low. Otherwise, the number of encoded packets required to ensure a successful decoding process will increase exponentially, which will make the decoding process too complex. Therefore, involving the rateless code in the protection scheme allows the encoder to combine a slightly larger number of original packets according to a certain degree distribution without degrading the packet-decoding capability. The objective of the RNC for protection purposes is to improve the protection capability of the NC by providing two stages of protection, that is, recovery through both the NC and the rateless code. Both of these codes share a common packet-processing method, which is the packet-combining process via the XOR operation.

In considering the PEC as the transmission channel, the fountain erasure code can be used with NC in the WSN due to its rateless characteristic. This characteristic significantly benefits the system in terms of lower overhead, especially for bulky (multimedia) data transmission, because the fountain code approaches optimal coding as the number of packets grows large. Unlike the raptor code, which requires a complex multiple-stage decoding process, the LT-code, which is the first realization of the fountain code, is the most commonly used for the RNC since it is simple to realize, especially in the WSN environment. By collecting slightly more encoded packets than the number of original packets, all original packets can be recovered with high probability.

From the previously mentioned RNC schemes, it can be summarized that none of them apply the recovery process at the intermediate nodes, although NC is allowed there; the recovery process is only applied at the decoder. In general, to improve the efficiency of the protection scheme, all intermediate nodes should be able to apply the recovery process, which can be done using NC, as soon as packet loss is detected. However, the targeted efficiency may not be achievable since NC may cause the Soliton distribution property of the LT-code to vanish.

Moreover, the overhead in the LT-code is another issue that must be considered. Basically, the overhead is caused by the redundancy of the transmitted encoded packets generated by the encoder that has no knowledge about what is happening at the decoder side. Based on the original LT-code, the received encoded packet is redundant when there is no new original packet recovered from it after the decoding process. Therefore, one of the ways to improve the performance of the original LT-code is by using the enhanced RSD, which is called shifted distribution. The shifted distribution is able to reduce the redundancy of the transmitted packets by altering the degree of encoded packets produced by the encoder based on the number of original packets that have been successfully recovered at the decoder. The shifted distribution is also improved with the capability of estimating the number of original packets that have been decoded without requiring acknowledgement from the decoder.

8. Conclusion

In this paper, the review on the theories and previous works related to the area of NC and rateless coding in WSN has been provided. The review focuses on NC and rateless-code-based protection schemes that are able to reduce the number of packet losses and subsequently increase the packet reception rate. The main concept of NC is described in depth, including its ability to improve network capacity and reduce power consumption. This has significant benefits for WSN, which is known to have limited power and bandwidth. Besides, the review also outlines several rateless codes, including their benefits and drawbacks toward the improvement of the quality of service of data transmission. Also, the concepts of RNC from several previous studies are discussed. With rateless and low complexity characteristics, the rateless code is used to support the NC in increasing the packet reception rate through end-to-end packet recovery.

For future studies, the protection scheme can be realized by integrating the NC, the rateless code, and the multipath network. By exploiting the benefits of those three schemes, the protection scheme can provide hop-by-hop packet protection. The idea is that when packet loss is detected by any node in the network, the NC will be used to recover the lost packet by using the packets available at the nearest two or more nodes from other paths. The purpose of the rateless code is to provide end-to-end packet recovery in case the NC fails to recover the lost packets in the network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Ministry of Education (MOE), Universiti Teknologi Malaysia (UTM), and Research Management Centre (RMC) for the sponsorship and Telematics Research Group (TRG) for the full support and good advice. This work is supported under Grant Q.J130000.2723.01K18.