Abstract

Network coding was introduced by Ahlswede et al. in a pioneering work in 2000. This paradigm encompasses coding and retransmission of messages at the intermediate nodes of the network. In contrast with traditional store-and-forward networking, network coding increases the throughput and the robustness of the transmission. Linear network coding is a practical implementation of this paradigm, covered by several research works that include rate characterization, error-protection coding, and construction of codes. Determining the coding characteristics is especially important in providing the premise for an efficient transmission. In this paper, we review the recent breakthroughs in linear network coding for acyclic networks, with a survey of the code construction literature. Deterministic construction algorithms and randomized procedures are presented for traditional network coding and for error-control network coding.

1. Introduction

The theoretical foundations of network coding were first provided by Ahlswede et al. [1]. The idea originated in the sphere of satellite communications, in the scenario of Figure 1: a satellite acts as a relay node for the bidirectional communication between two ground stations in non-line-of-sight. The relay receives the data from the ground stations and broadcasts a function of the two messages (Figure 1(b)). Each station can decode the other's message from the transmitted and the received information, with a remarkable saving of bandwidth.

Ahlswede et al. applied the idea of coding at intermediate nodes to general multinode networks, referring to it as the network information flow problem. Network coding allows network nodes to perform decoding and reencoding of the received information, resulting in the retransmission of messages that are a function of the incoming messages, as opposed to traditional routing (which can be regarded as a special case of network coding). The network equivalent of the satellite example is the butterfly network (Figure 2). The benefit of coding at the bottleneck edge is an increased rate with respect to traditional networking, in which the intermediate node can only forward one packet at a time. The major achievement of network coding is thus that the transmission rate can reach the full theoretical capacity of the network in multiple-source multicast scenarios [1].
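
As a concrete illustration of the coding gain in the butterfly example, the following minimal sketch (in Python, over GF(2), with made-up packet contents) shows the bottleneck node forwarding the XOR of the two source packets and each sink recovering the missing packet by XORing out the one it already holds.

    # Butterfly-network sketch over GF(2): the bottleneck forwards a XOR b,
    # and each sink cancels the packet it received on its direct path.

    def relay_encode(a: bytes, b: bytes) -> bytes:
        """The bottleneck transmits a single coded packet instead of two."""
        return bytes(x ^ y for x, y in zip(a, b))

    def sink_decode(coded: bytes, known: bytes) -> bytes:
        """A sink XORs out the packet it already holds to recover the other."""
        return bytes(x ^ y for x, y in zip(coded, known))

    a, b = b"packet-A", b"packet-B"
    coded = relay_encode(a, b)
    assert sink_decode(coded, a) == b   # sink 1 holds a, recovers b
    assert sink_decode(coded, b) == a   # sink 2 holds b, recovers a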

Network coding has generated a number of fertile theoretical and applied studies, and it is foreseen to be successfully implemented in future applications. In wireless and cognitive radio networks, the physical superposition of signals can be seen as a benefit instead of interference and exploited for coding at the physical level [2, 3]. The communication can be protected against attacks from malicious nodes [4], eavesdropping entities [5, 6], and impairments such as noise and information losses [7, 8], thanks to the property of the network of acting as a coding operator. In peer-to-peer networks, the distribution of a number of encoded versions of the source data avoids the well-known problem of the missing block at the end of the download [9, 10].

The prospect of applying network coding in several fields follows from the theoretical studies that laid the foundations of this transmission paradigm. The basics of network coding are reviewed in this paper, ranging from transmission models to the vast topic of code construction based on information vectors. Determining the coding operations for specific networks allows transmission with desired properties, especially in the case of coding for error control. The approaches based on vector transmission have been extensively studied, and a large variety of code construction methods have been proposed.

Linear codes on acyclic networks are the main topic of this paper, due to the large landscape of recent and ongoing research activities. Simple transmission models can be formulated thanks to the mathematical tools available for linear algebraic problems. These models help treat the transmission from the point of view of a coding operator. Coherent and noncoherent transmission models are distinguished according to whether knowledge of the network transfer characteristic is available at the transmitter and receiver sides. Both packet-based and symbol-based transmission are feasible in the case of coherent network coding, whereas noncoherent models need packet-based transmission to cope with an oblivious model of the network.

The construction of the network code is a problem of setting up the network prior to transmission. Two main classes of construction approaches are distinguished. Deterministic algorithms are considered when full knowledge of the network is available and infrequent topology changes are expected. Randomized approaches are considered to construct the network code in a distributed way.

Coding against noise, packet losses, and attacks from malicious entities extends classic coding theory to channel operators by considering a physical, spatial dimension for coding. Models for transmission with error control coding and constructions of the network code are discussed for coherent and noncoherent models of transmission.

The paper is organized as follows. In Section 2, we review linear network coding and the recent developments in terms of the mathematical tools to comprehend linear network codes. Section 3 presents deterministic and randomized methods for network code construction. Network error correction theory and construction algorithms for coherent and noncoherent transmission are presented in Section 4. Section 5 concludes the paper.

2. Linearly Solvable Networks

The formulation of network coding has been studied as the problem of determining the existence of the coding functions (solvability of the code). As presented in [1], no assumption was made about the nature of the coding functions. However, linear coding allows the use of a plethora of well-known mathematical tools, which makes it appealing from the engineering point of view.

Consider a communication network as a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, composed of a vertex set $\mathcal{V}$ and a directed edge set $\mathcal{E}$. For coding purposes, all edges in the graph have integer capacity. One or more edges with unit capacity are considered between the nodes connected by higher-rate links.

Linear network coding was introduced by Li et al. [11]. In linear network coding, the transmitted messages are linear combinations of the messages at the input edges of each node (Figure 3). A linear code multicast (LCM) is a set of linear encoding functions, one for each edge of $\mathcal{G}$, which can correctly carry an information flow from a set of source nodes $\mathcal{S}$ to a set of sink nodes $\mathcal{T}$.

Given a nonsource node $v$, we refer to its sets of input and output edges as $\mathrm{In}(v)$ and $\mathrm{Out}(v)$. Let $y_d$, $d \in \mathrm{In}(v)$, be the messages on the incoming edges to $v$; then the message transmitted on an edge $e \in \mathrm{Out}(v)$ is

    $y_e = \sum_{d \in \mathrm{In}(v)} k_{d,e}\, y_d.$    (1)

The coding coefficients $k_{d,e}$ constitute the local encoding kernel of an edge $e$, for all $d \in \mathrm{In}(v)$, whereas the global encoding kernel $f_e$ of an edge $e$ is the result of local coding at all nodes in upstream-to-downstream order, until edge $e$:

    $f_e = \sum_{d \in \mathrm{In}(v)} k_{d,e}\, f_d,$    (2)

where $v$ is the tail node of $e$ and the global kernels of the $\omega$ imaginary input edges at the source are taken as the standard basis of the coding space.
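
To make the recursion in (2) concrete, the following Python sketch computes the global encoding kernels in a single upstream-to-downstream pass. The three-edge topology, the prime field GF(7), and the local kernel values are illustrative assumptions, not taken from any of the surveyed works.

    # Global kernels from local kernels, as in (2), over a prime field GF(p).
    p = 7                     # illustrative field size
    omega = 2                 # source dimension

    # edges in topological order, each mapped to its parent edges; 's1', 's2'
    # are the source's imaginary input edges, whose kernels are unit vectors
    parents = {"e1": ["s1", "s2"], "e2": ["s1", "s2"], "e3": ["e1", "e2"]}
    local = {("s1", "e1"): 1, ("s2", "e1"): 0,
             ("s1", "e2"): 0, ("s2", "e2"): 1,
             ("e1", "e3"): 1, ("e2", "e3"): 3}   # arbitrary local kernels

    f = {"s1": [1, 0], "s2": [0, 1]}             # base case of the recursion
    for e, ds in parents.items():                # upstream-to-downstream pass
        f[e] = [sum(local[d, e] * f[d][i] for d in ds) % p
                for i in range(omega)]

    print(f["e3"])   # 1*f["e1"] + 3*f["e2"] = [1, 3] (mod 7)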

Typically, in delay-free networks, messages of $m$ bits are considered as symbols in a finite field of size $q$, for example, a Galois field of size $2^m$ or a ring of polynomials of maximum degree $m-1$. In networks with delays, a similar algebra is considered, based on a ring of polynomials whose coefficients are rational functions in a delay variable $D$ [12].
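
As a side note on the symbol arithmetic, the sketch below multiplies two symbols of GF(2^8); the choice of m = 8 and of the reduction polynomial 0x11B (the one used by AES) is purely illustrative. Addition in such a field is a plain XOR, while multiplication is a carryless polynomial product reduced modulo an irreducible polynomial.

    def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
        """Multiply two symbols of GF(2^8) by shift-and-XOR with reduction."""
        acc = 0
        while b:
            if b & 1:
                acc ^= a          # add the current multiple (XOR = GF(2) add)
            a <<= 1
            if a & 0x100:         # degree reached 8: reduce modulo poly
                a ^= poly
            b >>= 1
        return acc

    assert gf256_mul(0x57, 0x83) == 0xC1   # known GF(2^8) test value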

2.1. Algebraic Approach to Acyclic Networks

Consider a network without cycles and with instantaneous transmission. The basic theory can then be extended to networks with delayed transmission and cycles. The algebraic approach to the transfer characteristic was proposed by Koetter and Médard [12].

Linear coding across the network induces a linear transformation on the source message vector. An adjacency matrix $F$ is defined as an $|\mathcal{E}| \times |\mathcal{E}|$ matrix

    $F_{i,j} = k_{e_i,e_j}$ if $\mathrm{head}(e_i) = \mathrm{tail}(e_j)$, and $F_{i,j} = 0$ otherwise.    (3)

The network transfer characteristic can then be modeled by a system matrix induced by the linear code [12]

    $M = A\,(I - F)^{-1}\,B^{T},$    (4)

where $A$ is the source-symbol routing matrix, and $B$ is the routing matrix at the receivers. Consider a network with a max-flow $\omega$ to each destination. A source codebook is a vector space spanned by the LCM. The source bitstream is arranged in a vector codeword $x = (x_1, \ldots, x_\omega)$, with symbols $x_i \in \mathbb{F}_q$. The codeword received by a sink node $t$ is a vector of messages $z$, obtained by means of the network transformation as [12]

    $z = x\,M.$    (5)

The network thus operates a transformation that projects the source codebook into the receiver codebook by the system matrix $M$. The receiver decodes the source messages by inverting this transformation.
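
A small sketch of this algebraic model, reusing the toy topology and GF(7) field of the previous sketch: with the edges of an acyclic network in topological order, $F$ is strictly upper triangular and hence nilpotent, so $(I - F)^{-1} = I + F + F^2 + \cdots$ is a finite sum and no general matrix inversion is needed.

    # System matrix M = A (I - F)^(-1) B^T over GF(p) for an acyclic network.
    p = 7

    def mat_mul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) % p
                 for col in zip(*B)] for row in A]

    def system_matrix(A, F, B):
        """(I - F)^(-1) computed as I + F + F^2 + ... (F is nilpotent)."""
        n = len(F)
        I = [[int(i == j) for j in range(n)] for i in range(n)]
        acc, Fk = I, I
        for _ in range(n - 1):
            Fk = mat_mul(Fk, F)
            acc = [[(x + y) % p for x, y in zip(r1, r2)]
                   for r1, r2 in zip(acc, Fk)]
        Bt = [list(col) for col in zip(*B)]
        return mat_mul(mat_mul(A, acc), Bt)

    A = [[1, 0, 0], [0, 1, 0]]              # source symbols enter e1 and e2
    F = [[0, 0, 1], [0, 0, 3], [0, 0, 0]]   # local kernels: e3 = e1 + 3*e2
    B = [[0, 1, 0], [0, 0, 1]]              # the sink observes e2 and e3
    M = system_matrix(A, F, B)
    print(M)   # [[0, 1], [1, 3]]: full rank, so the sink can decode

The columns of the resulting $M$ are exactly the global encoding kernels of the observed edges, consistent with the kernel sketch given earlier.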

Successful transmission is achieved if the encoding vectors span the coding space. In order for the code to be correct and to span the coding space at the sinks, the global encoding vectors along the paths must retain linear independence. In other words, the vectors carried by each set of edge-disjoint paths have to form a basis for the coding space.

The coefficients of the source-to-network routing matrix $A$, the transfer matrix $F$, and the network-to-sink routing matrix $B$ have to be determined. In order for the code to be correct and to span the coding space at the sinks, the system matrix $M$ has to have rank equal to $\omega$.

In multisource multicast scenarios, the system matrix takes the form of a composite matrix in which each submatrix is the system matrix relative to the unicast transmission from one source to one sink. The rank of each submatrix has to be equal to the respective source-to-sink flow. Determining the coefficients is regarded as the problem of linear code construction. Randomized algorithms as well as deterministic and iterative construction algorithms have been proposed; they are discussed in Section 3.

If the code projected at the receivers has length $n$ bigger than the rate $\omega$ and it is a maximum distance separable (MDS) code, the codewords at the receivers have minimum distance $d_{\min} = n - \omega + 1$. Network error correction (NEC) was proposed to use the network transfer characteristic for error control purposes [7, 8]. Additional properties are required for the system matrix in the case of NEC, as discussed in Section 4.

2.2. Properties of a Linear Network Code

The existence of an LCM on a given network depends on the network topology, the number of edge-disjoint paths from each source to each receiver, and the base field size $q$. Linear network coding was demonstrated to be sufficient to reach the max-flow rate in single-source multicast networks. In other words, an $\omega$-dimensional LCM always exists if the base field is large enough [11, 13]. The lower bound for the alphabet size in single-source scenarios was stated to grow as the square root of the number of destination nodes $d$ [14]. Deterministic search of the coefficients might need a bigger field size because of the particular method used for the construction; for example, the preservative approach in [15] needs a field of size at least equal to the number of sinks. The condition is not tight for the existence of the code, but it is necessary for the algorithm to terminate successfully. Langberg et al. bounded the minimum number of coding-capable nodes that can sustain a flow of rate $h$ to $d$ destinations (on the order of $h^3 d^2$) and demonstrated that finding this number, as well as the minimum required field size, are both NP-complete problems [16].

The first deterministic algorithm for code search was presented by Li et al. [11]. Koetter and Médard proposed an algebraic solution in polynomial form [12]. Fragouli et al. proposed a solution of network coding by translating it into a graph coloring problem, where a number of known tools from discrete mathematics can be used for the code construction [17]. A class of deterministic code construction algorithms for single-source networks has since originated, inspired by the flow-path approach of Li et al., as well as randomized criteria; both are discussed in Section 3.

2.2.1. Multiple Sources

Most of the results about linear network coding refer to a single-source scenario, due to the sufficiency of linear codes to reach the network capacity in single-source multicast. This result subsumes the sufficiency of linear codes for multiple flows generated at the same source node [18].

A. R. Lehman and E. Lehman provided a taxonomy of network information flows, distinguishing between problems that do not benefit from network coding and problems where network coding increases the throughput [14]. Within the latter class, a series of networks was identified for which no linear code solving the flow problem exists. Further studies on network coding with tools such as matroids and guessing games reach similar conclusions about the linear solvability of networks. Non-Shannon-type information inequalities were introduced by Zhang and Yeung in 1998 and were used to calculate the capacity of a network, demonstrating that such capacity cannot be expressed with Shannon-type inequalities [19]. With this premise, Dougherty et al. studied network coding using matroid theory and demonstrated the limits of linear codes, claiming a gap in the maximum linear coding capacity [20-22].

A similar result was demonstrated in the context of graph entropy. The solvability of specific cases was demonstrated by Riis, who solved the network coding problem with a guessing-game approach [23, 24]. It follows from these works that linear coding fails to solve general multiple-source network coding problems. The capacity region of generic networks was studied by Song et al., who gave inner and outer bounds [25]. Subsequently, the admissible coding region was fully characterized by refining the capacity bounds [26].

Network coding for correlated information sources can be seen as a form of distributed source coding [27]. The channel compresses the sources by means of network coding with a generalization of multilevel diversity coding [28]. The network can be seen as a Slepian-Wolf coder and achieves higher throughput in the general case than separate source and network coding [29].

Few deterministic algorithms cope with multiple-source multicast code construction. Still, efficient construction can be achieved with algorithms for generic codes [11, 13] or with randomized routines [30]. Linear correlation is indeed exploited by random linear codes, as discussed in [31]. A randomized approach based on matroid theory that encompasses multiple sources was presented in [32].

2.2.2. Networks with Cycles

Although most of the network coding literature is dedicated to acyclic networks, practical applications need to cope with possible cycles in the network. The instantaneous model can be reformulated from single-symbol network coding to symbol pipelines, as in convolutional coding. The algebra behind the model is based on polynomial rings [12]. Rational power series over finite fields are used as generalized linear coding kernels. Global encoding kernels can be written by analogy with (2)

    $f_e(D) = \sum_{d \in \mathrm{In}(v)} D^{t_{d,e}}\, k_{d,e}\, f_d(D),$

where $k_{d,e}$ is the local kernel and $t_{d,e}$ is the delay from $d$ to $e$. A matrix structure for the transfer characteristic can be described in agreement with the algebraic formulation of acyclic networks.

Convolutional codes and some construction algorithms were also studied by Erez and Feder [33], by Huang et al. [34], and by Barbero and Ytrehus [35].

A general theoretical model was provided by Li and Sun [36], who proposed a general coding framework in terms of discrete valuation rings (DVRs); it subsumes convolutional codes as a particular case. Convolutional codes for error correction were recently presented by Prasad and Rajan [37, 38].

Since LCMs in acyclic networks are well understood, in the next section we discuss construction algorithms for the acyclic case only.

3. Construction of Linear Network Codes

Code construction techniques for an LCM in acyclic networks can be classified according to different aspects (see Table 1). One fundamental distinction in the transmission model is whether the sender and receiver have knowledge of the network coding functions. The model of network coding can then be considered coherent or noncoherent. Coherent network coding assumes that coding and decoding happen with full knowledge of the network transfer characteristic. This can happen if the code is constructed in a centralized manner, where a supernode or a central authority imposes the encoding kernels on the rest of the network. Deterministic and semirandomized approaches are feasible with centralized construction, given that in both cases the central authority ensures that the code is correct for the network. Both packet-based and symbol-based communication are feasible with deterministic approaches.

In the case of noncoherent modeling, the network topology and the channel transfer characteristic are known neither by the receiver nor by the transmitter. Randomized approaches are implemented in a distributed way: each node randomly combines the incoming messages before retransmission. Randomized approaches need packet-based communication, since the receiver builds a decoding matrix based on the information delivered implicitly in the packets.

Linear codes and matroids have been an active topic of research. Based on a theory of linear solvability equivalence, any linearly solvable network naturally induces a representable network matroid. For code construction through matroids, the use of linear algebra and projective planes has been proposed [41], as well as the approach of matrix completion [40].

We now present some algorithms for network code construction, divided into deterministic and randomized approaches.

3.1. Deterministic Construction

Two main approaches to coherent construction have been discussed in the literature [42]. A first approach, called flow path, was first proposed for generic codes by Li et al. and later improved for multicast by Jaggi et al. in their preservative design [15]. Another approach, based on matroid theory and matrix completion, was proposed by Harvey et al. [40].

The approach of Li et al. achieves generic network codes, that is, it yields linear independence of the coding vectors among the largest possible sets of edges. The algorithm of Li et al. is greedy and not computationally efficient [11].

Jaggi et al.'s algorithm [15] constructs a single-source LCM which achieves linear independence of the coding vectors for a specific set of nodes with a smaller field size than previous implementations (see Table 2). This preservative approach is so far the most followed by deterministic construction algorithms. The algorithm achieves linear independence of the global encoding vectors of the edges on the paths from the source to each sink, so that they always form a basis for the coding space. It considers the edges one at a time in topological order for the calculation of the local encoding vectors and ensures the preservation of linear independence at all steps. To choose the local encoding kernels, a semirandomized procedure draws the vectors at random and tests their linear independence (linear information flow, LIF). A completely deterministic choice of the local encoding kernels is also proposed (deterministic LIF, DLIF).

In the same work, a fast routine is also presented in which, breaking the flow-path criterion, the coefficients of all edges are chosen randomly [15]. As opposed to distributed randomized approaches, the rank of the transfer matrix is checked by the central entity before transmission. In case of code failure, the routine is performed again. Success probabilities of such routines are discussed in the next section and in Table 3. The total execution times of the three algorithms are compared in Table 2.
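
This fast routine admits a compact sketch under simplifying assumptions: all free local kernels are drawn uniformly over GF(13) (an illustrative field), each sink's system matrix is rebuilt, and the draw is repeated until every matrix reaches full rank. The toy single-sink matrix mirrors the earlier examples; the helper names are hypothetical.

    import random

    p = 13   # a larger field lowers the probability of redrawing

    def rank_mod_p(M):
        """Rank over GF(p) by Gauss-Jordan elimination (Fermat inverses)."""
        M = [row[:] for row in M]
        rank = 0
        for c in range(len(M[0])):
            piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
            if piv is None:
                continue
            M[rank], M[piv] = M[piv], M[rank]
            inv = pow(M[rank][c], p - 2, p)
            M[rank] = [x * inv % p for x in M[rank]]
            for r in range(len(M)):
                if r != rank and M[r][c] % p:
                    f = M[r][c]
                    M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
            rank += 1
        return rank

    def draw_until_valid(omega, sink_matrices_from, tries=1000):
        """Redraw random kernels until every sink matrix has rank omega."""
        for _ in range(tries):
            kernels = [random.randrange(p) for _ in range(2)]  # free kernels
            if all(rank_mod_p(M) == omega
                   for M in sink_matrices_from(kernels)):
                return kernels
        raise RuntimeError("no valid code found; try a larger field")

    # toy case: the sink observes e2 and e3 = k0*e1 + k1*e2, so its matrix
    # has the global kernels [0,1] and [k0,k1] as columns
    def sink_matrices_from(k):
        return [[[0, k[0]], [1, k[1]]]]

    print(draw_until_valid(2, sink_matrices_from))   # any draw with k0 != 0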

Jaggi et al. also present a variant of the preservative algorithm, which considers path failures [15]. The idea is to consider artificial edges feeding the nodes with random symbols, to simulate path errors. All failure patterns are checked during the design, at the cost of an exponential increase of complexity as the network grows in size. More efficient robust designs are considered in Section 4.

Langberg et al. presented an algorithm that reduces the network to an equivalent minimal set of encoding nodes and applies the preservative design on the equivalent graph [16]. The result is a less time-consuming algorithm with the same validity as an LCM calculated on the complete network.

An algorithm for code construction based on matroid theory is presented by Harvey et al. A problem of simultaneous matroid matching applied to the transfer matrices of the network is proposed, followed by a maximum-rank completion that gives rank $\omega$ to the system matrix of each sink [40]. In this way, the algorithm can also be applied to multiple-source multicast problems, as opposed to the aforementioned algorithms, which work for single-source networks. The procedure terminates in less time than Jaggi et al.'s algorithms and requires a smaller base field [40]. The complexity of the algorithm in the single-source multicast version is compared with the other algorithms in Table 2.

3.2. Randomized Construction

When the network topology is unknown or variable, randomized approaches turn out to be the best solution. Packet-based transmission is necessary for the receiver to deduce the network code. The choice of the coding coefficients is performed autonomously by each node for its outgoing edges. The receivers are eventually able to decode the message by deducing the transfer characteristic from the incoming packets. Decoding is done by means of two possible techniques. With subspace coding at the source, the receivers can examine the space spanned by the symbols across the packets and deduce the source space. The source may instead attach a header that contains a 1 in the position indexing each packet and 0s elsewhere; namely, the input message can be expressed as $X = [\,I \mid M\,]$, where $M$ is a matrix whose rows are the packets of symbols. The receiver can then extrapolate the global encoding kernels from the packet headers [45].
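
This header mechanism admits a short end-to-end sketch, assuming GF(251) symbols and a toy network that simply delivers two random mixtures of the source packets: the identity block prepended at the source carries the global kernels, which the sink inverts by Gauss-Jordan elimination.

    import random

    p = 251
    omega, n = 2, 4   # two source packets of four payload symbols each

    source = [[random.randrange(p) for _ in range(n)] for _ in range(omega)]
    # prepend an identity block: each packet starts out with its unit kernel
    X = [[int(i == j) for j in range(omega)] + pkt
         for i, pkt in enumerate(source)]

    def mix(packets):
        """A node retransmits a random linear combination of its inputs."""
        coeffs = [random.randrange(p) for _ in packets]
        return [sum(c * s for c, s in zip(coeffs, col)) % p
                for col in zip(*packets)]

    # the sink collects omega packets; redraw if the random mix is singular
    while True:
        received = [mix(X) for _ in range(omega)]
        det = (received[0][0] * received[1][1]
               - received[0][1] * received[1][0]) % p
        if det:
            break

    # Gauss-Jordan on [header | payload]: headers hold the global kernels
    rows = [r[:] for r in received]
    for c in range(omega):
        piv = next(r for r in range(c, omega) if rows[r][c])
        rows[c], rows[piv] = rows[piv], rows[c]
        inv = pow(rows[c][c], p - 2, p)
        rows[c] = [x * inv % p for x in rows[c]]
        for r in range(omega):
            if r != c and rows[r][c]:
                f = rows[r][c]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[c])]

    decoded = [row[omega:] for row in rows]
    assert decoded == source   # the headers made the code decodable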

A necessary condition for the construction of the decoding matrix is that the network code spans the coding space. Ho et al. stated that a sufficiently large base field is enough to ensure the existence of the code and the possibility of successfully decoding the transmission [31]. The probabilities of successful randomized construction and the field size required for the existence of such a code are summarized in Table 3. Table 3 also shows the probability of generating a correct network code in the presence of network degradation.

A correct network code in the presence of link failures or errors only exists if the source codebook has a degree of redundancy $r_t \geq 2l$ for all sinks $t$, where $r_t = m_t - \omega$ is the difference between the max-flow to sink $t$ and the source rate and $l$ is the number of errors to be corrected [43]. Error correction via network coding is presented in the next section, where we also analyze some evolutions of the aforementioned algorithms that construct codes for error protection.

4. Network Error Correction

The transmission in the network can be jeopardized by the injection of errors or the erasure of symbols. Errors are caused by channel noise or by the insertion of extraneous messages by malicious nodes. Loss of packets can happen because of congestion in the network.

Traditional error-correcting codes add redundancy to the transmission in the time domain. The network itself offers another mechanism for error correction, by adding redundancy in the spatial domain. Network error correction (NEC) was proposed by Cai and Yeung to drive network coding mechanisms to recover erroneous symbols as well as lost packets with network error-correcting codes [7, 8]. NEC extends all the knowledge of classic coding theory, such as coding distance, weight measures, and coding bounds, by including the network as a coding operator. Normal network coding can be regarded as a special case of NEC without error control properties. The formulation of the problems of existence and construction of the network code can be extended to the NEC case.

Ho et al. presented a theoretical framework for network failure management [46]. A distinction between link failure and node failure is made, the difference residing in a faulty node losing all of its links, and consequently all the paths through that node. NEC in general includes both possibilities by considering the potential loss of a number of links. Two recovery schemes are then presented: a receiver-based recovery scheme, in which, under loss of information flow, each receiver can react and recover the lost information, and a network-wide recovery scheme, in which all nodes contribute to the recovery procedure [46]. Network coding against errors usually achieves receiver-centric error correction.

The algebraic model of transmission can be extended by considering errors as random alterations of the symbols on the edges and erasures as symbol cancellations. The alteration is considered as an additive error vector $z$, so that the word received by sink $t$ becomes

    $y_t = x\,M_t + z\,G_t,$

where $M_t$ is the system matrix toward sink $t$ and $G_t$ is the transfer matrix from the edges to sink $t$.

The error pattern $\rho(z)$ of $z$ is a vector with unitary components in the positions of the nonzero components of $z$. In the following, we present bounds, weights, and coding distance definitions as well as some construction algorithms for NEC codes.

We consider a network with a max flow equal to $m_t$ to each destination $t$. The source codebook can be redundant; that is, it is an $\omega$-dimensional subspace of the vector space spanned by an LCM. The coding space projected at the receiver retains its dimensions in the presence of network degradations under certain conditions and thus supports transmission at a controllable error rate, thanks to the NEC code. A code is defined as $l$-error correcting if each destination can successfully recover the source message in the presence of at most $l$ total errors. Additionally, the existence of linear MDS codes allows superior control over error injections. The code characteristics are discussed in the next section.

As for normal network coding, for NEC transmission a coherent or a noncoherent model of the network may be considered, generating different approaches to the assignment of the network code. With the coherent model, a deterministic construction of MDS codes is possible under some prerequisites (e.g., field size), as explained in Section 4.1. Noncoherent models need a statistical study of the transmission and can be formalized as transmission over a random linear operator channel (LOC) or a random matrix channel. Randomized network coding for error correction has been studied from the point of view of the existence of a code with a certain error correction characteristic. Coding and decoding for random coding are tackled by different approaches. Assuming a packet-based transmission model, statistical decoding can be performed at the receiver as proposed in [47], whereas subspace coding has been proposed in [48] and further explored in [49]. Randomized coding and stochastic matrix channels are discussed in Section 4.2.

Network Bounds and Weights
The coding bounds for network codes have been defined in [7, 8]. The Hamming bound, Singleton bound, sphere-packing bound, and Gilbert-Varshamov bound define the relation between codebook size (subspace dimensions), field size, network capacity, and coding distance. A refined formulation for networks with unequal max-flows to the sinks has been presented by Yang et al. [50].
Following these definitions, we can regard NEC codes as having the same characteristics as classic linear channel codes in terms of the detection and correction of errors and the correction of erasures. A codebook redundancy at the receiver $t$ can be defined as $r_t = m_t - \omega$. The minimum coding distance has been defined in terms of the coding redundancy at each sink as $d_{\min,t} \leq r_t + 1$ [47] and can be applied in the scope of symbol-based transmission; that is, the codewords at the receiver are vectors of symbols in $\mathbb{F}_q$ as in (5). From the definition of coding distance, it follows that the number of correctable errors at the receiver is $\lfloor (d_{\min,t} - 1)/2 \rfloor$, and the number of detectable errors is $d_{\min,t} - 1$. In MDS codes, the definition of minimum distance follows classic coding theory as the minimum among the network Hamming distances between the message vectors. The existence of MDS codes has been discussed by Zhang [47] (see Table 4 for the required conditions).
Silva et al. give a reinterpretation of the coding metrics [49]. They make use of matrix codes formed by the packetized structure of the transmission. Minimum rank distance (MRD) codes are defined as matrix codes whose minimum distance between elements $X$ and $Y$ is measured by the rank distance $d_R(X, Y) = \operatorname{rank}(Y - X)$. These codes attain the Singleton bound with respect to the rank metric.
Yang et al. introduced two classes of coding weights for the vectors involved in symbol-based transmission, referring to the received vector $y$, the error vector $z$, and the message vector $x$ [51]. Given the classic definition of the Hamming weight $w_H(\cdot)$ as the number of components of a vector that differ from the corresponding components of the zero vector, a first class of descriptors can be defined as follows [51]. The network Hamming weight of the received vector $y$ is
    $W_t^{\mathrm{rec}}(y) = \min \{ w_H(z) : z\,G_t = y \},$
where the minimum is searched among all the error vectors that result in receiving the word $y$ at the receiver. The network Hamming weight of the error vector $z$ is then $W_t^{\mathrm{err}}(z) = W_t^{\mathrm{rec}}(z\,G_t)$, and the network Hamming weight of the message vector $x$ is $W_t^{\mathrm{msg}}(x) = W_t^{\mathrm{rec}}(x\,M_t)$. Another form of network Hamming weight can be defined for the error vector in terms of the mincut of the network and of the rank of the transmission matrix resulting from the error pattern [51]. The mincut weight of an error vector $z$ is
    $W_t^{\mathrm{cut}}(z) = \min \{ \mathrm{mincut}_t(\rho(z')) : z'\,G_t = z\,G_t \},$
where $\mathrm{mincut}_t(\rho(z))$ indicates the mincut of the edges included in the error pattern $\rho(z)$. The rank weight of an error vector is defined in the same way through the rank of the error pattern, which indicates the rank of the matrix formed by the rows of $G_t$ corresponding to the edges in the pattern. With these weights, the network Hamming weight of an error vector can be expressed as a minimum among all the error vectors that result in receiving the same word as $z$ when the transmitted word is $x$, which is substantially equivalent to the definition given before for the first class of network weights.
As for generic network codes, a coherent or a noncoherent model of the network can be assumed. The coherent model assumes that the receiver decodes the transmission with a priori knowledge of the topology and the network code. Coding coefficients chosen by centralized and deterministic construction are feasible with both symbol-based and packet-based approaches. Randomized routines need packet-based transmission to record the encoding kernels in the packet headers.
The noncoherent model assumes that the network code is not known at the moment of decoding. The effort of exploiting the network characteristic moves from the network to the transmitter and the receiver. The idea of the linear operator channel (LOC) has been studied by various authors [48, 52, 53] and has been demonstrated to be useful to model a random channel with error control coding properties.
In the following subsections, we present some of the algorithms for NEC code construction in single-source multicast networks.

4.1. Deterministic Construction

Deterministic algorithms for code construction assume that the topology of the network is known and that the possible events of error injection can be systematically analyzed. Because of the coherent model of transmission, it is possible to have MDS symbol-based codes. Nevertheless, packet-based transmission is always possible and can yield superior error correction capability, by exploiting at the receiver the fact that all packets share the same network code [54].

Error resiliency for generic LCMs was considered before the formalization of NEC. A first deterministic algorithm was presented by Jaggi et al. together with their preservative approach to LCM construction [15]. The idea is to consider artificial edges feeding the nodes with random symbols, to simulate path errors. All failure patterns are checked during the design, which causes an exponential increase of computational complexity.

Yang et al. proposed two completely deterministic and centralized algorithms achieving the refined Singleton bound, including unequal flows to the sinks [50]. Network code and codebook are reciprocally matched to achieve at each receiver the maximum distance $d_{\min,t} = m_t - \omega + 1$. The first algorithm determines the network coding kernels first and then constructs a proper codebook: the encoding kernels spanning the coding space are chosen, and the codebook is then built on the bases of the network code that preserve the minimum coding distance [50]. The second algorithm determines a codebook by means of traditional block-code generation and then constructs the network code. It is iterative; that is, it constructs the code edge by edge in upstream-to-downstream order. The local coding kernels that retain the coding distance are chosen by eliminating the kernels that do not support the source rate under all failure patterns (i.e., that would reduce the mincut below the rate). This technique is a generalization of the linear independence preservation of the preservative design. The processing load and the required field size of these two algorithms are compared in Table 4.

Matsumoto proposed another centralized and deterministic algorithm, improving the algorithm of Jaggi et al. for error protection [56]. The basic idea is to build an extended network with imaginary nodes feeding the regular edges, expanding the number of edge-disjoint paths to the sinks. To make the new paths independent from the regular ones, the preservative algorithm is run on the extended network. This algorithm has requirements similar to Yang's algorithms, as shown in Table 4.

Bahramgiri and Lahouti also proposed some variants of construction algorithms in line with Jaggi et al.'s design [55]. Three versions are considered. A first scheme, called robust network coding 1 (RNC1), chooses the local kernels with a criterion of partial linear independence: all subsets of a given size of the paths from source to sink have to be independent. A random subroutine is considered in this case, and the linear independence is verified. When paths fail, the receiver can always recover the information symbols but cannot cope with the alteration of paths intended for other sinks; thus, the scheme is successful only in case of erasures in a unicast scenario. A second scheme (RNC2) also considers the subset of paths which have any edge in common with paths to other sinks. The scheme increases the complexity by a factor equal to the number of paths with joint edges, but copes with all path failures in case of erasures. A third scheme (RNC3) considers the use of backup paths. In the other construction algorithms, edge-disjoint paths are considered from source to sink, without considering edges in other possible paths. Backup paths superpose with the edge-disjoint ones and use inactive edges as a backup in case any of the other paths fail. In this algorithm, at each step, the partial independence is verified among subsets of all possible paths. This scheme increases the complexity by a factor equal to the number of possible subsets of paths. The three algorithms are compared in Table 4 and discussed in detail in [55].

Another greedy algorithm was proposed by Guang et al., embracing the principles of the preservative design [57]. Linear independence of the local encoding kernels is guaranteed for all error patterns with linear independence testing or with a deterministic implementation similar to Jaggi et al.'s LIF and DLIF criteria, with a global complexity given in Table 4. Note that the formulae in Table 4 have been adapted to fit a simplified notation with equal parameters for all receivers. For example, the formula for Guang et al.'s algorithm [57] considers the complexity of constructing the code for up to a given number of link failures, whereas we reformulate it for a maximum of $l$ errors.

4.2. Randomized Construction

Randomized construction is performed when there is no knowledge of the network topology. Network coding kernels are chosen randomly, and the transfer characteristic is then exploited at the receiver side. The transmission is supposed to be packet based, because the receiver has to deduce the network transfer characteristic from a set of transmitted data. Algebraically, the transmission has the formulation [47]

    $Y_t = X\,M_t + Z\,G_t,$    (17)

where $X$ is an $\omega \times n$ matrix whose rows are the packets of $n$ symbols transmitted on each of the $\omega$ paths, and $Z$ is an $|\mathcal{E}| \times n$ matrix of error symbols affecting the edges; $Y_t$ is the matrix of symbols received by sink $t$. A randomized choice of the local encoding kernels produces a random matrix channel. The coding space projected at the receiver is correct under random link failures with a probability that increases with the base field size. Koetter and Médard stated, in the sphere of regular codes, that a randomly generated code remains correct under the failure patterns that do not reduce the capacity below the transmission rate, with a probability that depends on the field size [12]. The probability of failure of a randomized choice of the code in the presence of link failures was also derived by Ho et al. [31].

These probabilities were studied by Balli et al., who refined the formulation of the success probability for randomized codes with codebook redundancy at the source ($r_t = m_t - \omega$) and variable degradation. This formulation assumes that the codebook redundancy is used to counter network degradation up to a number of errors $l$. It also gives the probability mass function of the minimum distance [43]. These probabilities and the minimum field size required for existence are compared in Table 3.

Guang et al. calculated the probability of a randomly generated code being a maximum distance separable (MDS) code, besides being space spanning under errors [44]. The given formulae express the success probability of obtaining an MDS code with minimum distance equal to or greater than the code redundancy. These probabilities are also shown in Table 3.

There are mainly two techniques for transmitting with randomized network coding. If the global encoding kernels are implicitly communicated to the receiver (for instance in the headers of the packets, as in Chou's practical framework [45]), the decoding matrix can be constructed at the receiver by parsing the packet headers. In contrast, subspace coding is a coding technique which does not consider the transfer characteristic of the network for decoding. These two transmission techniques have in common the distributed and randomized choice of the encoding kernels and are compared in the following.

4.2.1. Statistical Decoding

Zhang proposed decoding criteria for packet-based network coding, with the algebraic formulation as in (17). Similarly to Chou's practical framework, in packet-based statistical decoding the sender appends to each packet a unitary basis of the coding space; namely, it inserts unit vectors in the packet headers at the source as $X = [\,I \mid M\,]$, so that the receiver can read the global encoding kernels from the received packets and build a decoding matrix [45]. Such an approach is coherent in the sense that the receivers need to know the network transfer characteristic, but it is decentralized in the choice of the encoding kernels.

The decoding equation deduced from the packet headers is to be solved for error patterns of increasing rank. The message part of the solutions is unique for error patterns of rank up to $\lfloor (d_{\min,t} - 1)/2 \rfloor$ [47]. A brute-force decoding algorithm based on the minimum possible rank of the error patterns is proposed, and a fast decoder based on Gaussian elimination is also presented, which exploits the fact that all symbols in a packet are subject to the same coding [47]. The latter formulation was also proposed as a technique which corrects errors beyond this bound with reasonably favorable probability, thus exceeding the capability of a normal error-correcting code [54].

Zhang et al. also proposed a hybrid FEC/NEC method to increase the error correction capability of the network code [54, 58]. In traditional detection/deletion methods, an error detection code is used within a packet. The hybrid system uses a local FEC code to detect errors at intermediate nodes and records the error pattern in the packets. Network error decoding at the receiver is then performed with knowledge of the error pattern, thus achieving correction performance close to the bound of erasure correction [54].

The reliability of the decoding matrix built from the packet headers can be a problem. Errors in the packet header (which affect the reception of the global kernels) and attacks from malicious nodes are treated with similar techniques in the literature. The difference is that random alterations in the packet header can be corrected, whereas adversaries may have the capability of altering the header to make it useless at the receiver [47]. To some extent, coding against attacks from Byzantine adversaries has been related to subspace coding because of the questionable reliability of the information in the packet header.

Jaggi et al. proposed in various works simple distributed algorithms for robust encoding at the source against Byzantine adversaries [4]. The model for error injection assumes an intelligent entity with knowledge of the sender's and receiver's intentions. Three situations are discussed: the malicious entity has knowledge of a part of the transmission paths (shared secret), has knowledge of all the transmissions occurring in the network (omniscient adversary), or can jeopardize only part of the transmitted packets (limited adversary), resembling noise injection up to a certain error rate. The proposed coding strategy is asymptotically rate optimal, that is, it achieves the Singleton bound. A generic network code is used for the internal nodes, whereas the source generates random combinations, and the receiver performs statistical decoding.

4.2.2. Subspace Coding

The subspace coding approach was first proposed by Koetter and Kschischang [48]. The transfer matrix of the network becomes a channel with unknown coding properties. A linear operator channel (LOC) model can be used to describe this kind of channel, as studied by Yang et al. [52]. The capacity of LOCs was studied in the coherent and noncoherent cases. Although channel training techniques make it possible to perform statistical decoding, subspace coding assumes a channel-oblivious model, in which neither the transmitter nor the receiver has knowledge of the network topology or encoding kernels.

The transformation operated by the network as an LOC can be expressed similarly to (17). Considering the transmission of packets spanning an input space $U$, the sinks receive packets spanning another space $V$, related to the input space by an operator

    $V = \mathcal{H}_k(U) \oplus E,$    (18)

where $\mathcal{H}_k$ is an erasure operator which returns a $k$-dimensional subspace of $U$ (corresponding to packet erasures in the network) and $E$ is an error space which adds dimensions (injection of erroneous symbols). These spaces are in general subspaces of an ambient space $W$ of dimension $n$. A distance between subspaces of $W$ can be formulated, giving a valid metric for $\mathcal{P}(W)$, that is, the set of all subspaces of $W$ (the Grassmannian of $W$) [48]. Such a metric is

    $d(A, B) = \dim(A + B) - \dim(A \cap B),$    (19)

and it opens up the possibility of building codes for operator channels (subspace codes). A subspace code $\mathcal{C}$ is a subset of $\mathcal{P}(W)$, that is, a nonempty set of subspaces of $W$. Any subspace in $\mathcal{C}$ can be transmitted by injecting a basis and can be reconstructed at the receiver from the received space if the minimum distance of $\mathcal{C}$ is bigger than $2(t + \rho)$, where the distance is defined as in (19), and $t$ and $\rho$ are the dimension of the error space $E$ and the number of erasures, respectively [48]. Sphere-packing, sphere-covering, and Singleton bounds were given for the subspace metric [48]. A Reed-Solomon-like construction based on linearized polynomials was also studied [48].
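
The metric in (19) reduces to rank computations: $\dim(A + B)$ is the rank of the stacked generator matrices, and $\dim(A \cap B)$ follows from the dimension formula. Below is a minimal sketch over GF(2), with generator rows packed into Python integers and purely illustrative subspaces.

    def rank_gf2(rows):
        """Rank of a binary matrix whose rows are packed into integers."""
        basis = {}                        # leading-bit position -> vector
        for v in rows:
            while v:
                hb = v.bit_length() - 1
                if hb not in basis:
                    basis[hb] = v
                    break
                v ^= basis[hb]            # eliminate the leading bit
        return len(basis)

    def subspace_distance(A, B):
        """d(A, B) = dim(A + B) - dim(A ∩ B) for row spaces over GF(2)."""
        d_sum = rank_gf2(A + B)           # generators of A + B: concatenation
        d_int = rank_gf2(A) + rank_gf2(B) - d_sum    # dimension formula
        return d_sum - d_int

    U = [0b1000, 0b0100]    # spanned by e1, e2
    V = [0b1000, 0b0010]    # spanned by e1, e3: 1-dim intersection with U
    print(subspace_distance(U, V))   # dim(U+V) - dim(U∩V) = 3 - 1 = 2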

Silva et al. studied other metrics for subspace coding, such as the rank metric. They propose a generalized decoding problem for a family of maximum rank distance codes, exploiting partial knowledge of network degradation in terms of erasures (knowledge of the error location but not the value) and deviations (knowledge of the error value but not the location) [49]. Their approach with the rank metric is based on Gabidulin codes and is analogous to the subspace-metric approach with Reed-Solomon codes.

Bossert and Gabidulin, on the other hand, derived a generalization of the Koetter and Kschischang construction. A class of codes with the subspace metric was defined as the intersection of rank-metric codes with a lifting construction [59]. Code constructions with prescribed distance and Gilbert-type bounds were given in [60]. Gadouleau and Yan further studied packing and covering properties with the subspace and rank metrics [61].

Silva and Kschischang also proposed a metric which better models the effect of adversarial error injections: the injection metric [62]. This metric allows better design of nonconstant-dimension codes than the subspace metric. The construction of codes for the injection metric, as well as many other topics in subspace coding, are still open issues. This branch of network coding is not well explored yet, so it is expected to provide a wide landscape of topics for future research.

The potential advantages of using subspace coding over coherent network coding have been pointed out by Zhang [63]. Coherent network error correction keeps its advantage over subspace coding against random channel errors. The performance for erasure correction is the same for both approaches, whereas against attacks from malicious nodes the approach of rank-metric codes can achieve higher performance, because it does not need to record the encoding kernels in the header. On the other hand, centralized and deterministic approaches to code construction are seemingly impractical in this setting, because complete knowledge of the network may not be available at any instant.

5. Conclusions

In this paper, we have reviewed the recent breakthrough achievements in network coding theory. Most of the well-established knowledge of network coding is due to work on the theory and construction algorithms for linear coding, network error correction theory, and the related construction algorithms. Further in-depth study of network coding can be found in the books by Yeung [28], Ho and Lun [64], and Fragouli and Soljanin [65, 66], and in the upcoming book edited by Médard and Sprintson [67].

Acknowledgment

This research was partially supported by the European Commission under contract FP7-247688 3DLife and FP7-248474 SARACEN.