Cognitive Modeling of Multimodal Data Intensive Systems for Applications in Nature and Society (COMDICS) Special Issue
Research Article | Open Access
Yan Liu, Jun Cai, Huimin Zhao, Shunzheng Yu, JianLiang Ruan, Hua Lu, "Efficient Coded-Block Delivery and Caching in Information-Centric Networking", Discrete Dynamics in Nature and Society, vol. 2020, Article ID 3838547, 16 pages, 2020. https://doi.org/10.1155/2020/3838547
Efficient Coded-Block Delivery and Caching in Information-Centric Networking
Information-centric networking (ICN) provides request aggregation and caching strategies that can improve network performance by reducing content server loads and network traffic. Incorporating network coding into ICN can offer several benefits, but a consumer may receive the same coded block from multiple content routers, since a coded block may be cached by any of the content routers on its forwarding path. In this paper, we introduce a request-specific coded-block scheme that avoids linear dependency among blocks while utilizing in-network caching. Additionally, a non-cooperative coded caching and replacement strategy is designed to guarantee that cached blocks can be reused. Our experimental results show that the proposed scheme outperforms conventional CCN and two network coding-based ICN schemes.
Trends in recent years have shown that Internet users care more about what content is than about where it is located. Information-centric networking (ICN) is a novel design for a future networking architecture that has been proposed as a promising alternative to the current Internet. In ICN, IP addresses are replaced by content names, and content routers (CRs) are equipped with storage capabilities to cache the content passing through them. Content is requested by Interest packets sent by the consumer. With in-network caching [2, 3], content can be cached by multiple CRs, and any CR that holds the content requested by an Interest can respond with a data packet, where both the Interest and the data are identified by the content name. Content-centric networking (CCN) has been shown to be a promising ICN architecture.
Network coding, proposed by Ahlswede et al., has been proven to be helpful in several different network scenarios, including peer-to-peer (P2P) networks, content distribution networks (CDNs), and wireless networks [8, 9]. Recently, several studies have shown that network coding can also benefit ICN [10–21], as it can be employed to effectively exploit multiple paths and reduce the complexity of cache coordination. However, due to the ICN caching strategy, the same coded block may be cached by multiple CRs on its forwarding path and later provided to the same consumer in response to its multicast requests.
In this case, the consumer will not be able to recover the content from the received coded blocks. Several solutions have been proposed to guarantee that all coded blocks provided to a consumer are linearly independent of each other. In some centralized schemes [11, 15], central routers ensure that the content caching and routing strategies provide independent blocks. In some distributed schemes [20, 21], each Interest must carry information about the coded blocks the consumer has already received in order to retrieve linearly independent blocks; a CR decides whether to respond to the Interest according to that information. Several round trips are therefore required to obtain sufficient linearly independent coded blocks. In our previous work, the CRs cached only the originally received blocks to guarantee that all coded blocks provided to consumers were linearly independent, so any coded blocks generated and transferred in transit were wasted.
To increase the caching efficiency and reduce the computation and communication costs induced by centralized schemes, we propose a request-specific coded-block (RSCB) scheme that reduces the transmission volume and download delay and ensures that only a single round trip is required for the consumer to retrieve sufficient linearly independent blocks. A non-cooperative coded caching and replacement strategy is then proposed to guarantee that any two coded blocks cached in the network are linearly independent. It is assumed that chunk-based routing and traffic control schemes are in place. The contributions of this paper are as follows:
(i) We propose a special content delivery strategy to retrieve blocks from multiple CRs simultaneously. Each CR on the forwarding path aggregates Interests received from multiple consumers for chunks of the same content, eliminating duplicates. Interests received by a CR are separated again and forwarded in different directions. A mechanism is proposed for the aggregation and separation of Interests to guarantee that the minimum number of coded blocks is requested and that these blocks are linearly independent.
(ii) An on-path non-cooperative coded caching mechanism is designed to guarantee that cached blocks can be reused. Blocks received by a CR can be encoded and cached depending on the pending Interests and the proposed caching strategy.
(iii) In our model, only chunks (i.e., original blocks) and coded-from-original blocks can be cached. One coded-from-original block can satisfy multiple Interests sent by different consumers requesting a set of its component chunks. A chunk-level coding-instead-of-evicting cache replacement scheme is designed to effectively increase the caching efficiency and optimize cache capacity.
(iv) Our strategy is evaluated by comparison with conventional CCN and two network coding-based ICN strategies.
Our experimental results demonstrate that the proposed strategy achieves the highest performance in terms of parameters such as average download time, server hit reduction rate, and cache hit rate.
2. Related Works
Network coding techniques have received much attention in a variety of network scenarios, including P2P networks, CDNs, and wireless networks. Recently, several works have applied network coding in ICN. There are two categories of solutions that can ensure consumers are provided with sufficient linearly independent coded blocks: centralized strategies and distributed strategies.
Wang et al. proposed a novel SDN-based framework to implement content caching and routing in ICN with linear network coding. The SDN controllers determine how to cache and route based on the information collected by the CRs, so a near-optimal caching and routing strategy can be obtained. Sadjadpour proposed an architecture based on index coding for ICN, which groups the nodes into several clusters. The central router of each cluster maintains information on which content is cached by each node, and coded blocks generated by the central router are used to satisfy Interests for different content sent by different nodes. However, this strategy does not reduce traffic the first time content is requested. Llorca et al. presented a multicast scheme based on network coding to achieve maximum network efficiency; however, the proposal does not describe how to deploy the strategy in ICN. Talebifard et al. proposed a method based on network coding that reduces the costs of coding and decoding by breaking the network into several clusters, with network coding only performed by selected nodes or clusters.
As well as centralized strategies based on network coding, some works have obtained enough linearly independent coded blocks by sending Interests repeatedly. Zhang and Xu proposed two checking strategies, called precise matching and RB matching, to guarantee that the consumer will receive sufficient linearly independent coded blocks. In precise matching, each Interest carries the global coefficients of the coded blocks already received by the consumer, and each CR performs Gaussian elimination to check for linear dependencies. Precise matching is an efficient approach to guarantee that all blocks received by consumers are linearly independent, but it has very high communication and computation overheads. RB matching was therefore proposed as a more lightweight approach, where the Interest only carries the rank r of the global coefficients of the coded blocks already received by the consumer. If the number of coded blocks cached by the CR is larger than r, the CR can respond to the Interest with a coded block; the larger the value of r is, the more difficult it is to serve the Interest. Wu et al. proposed a network coding and random forwarding-based caching strategy, CodingCache, to enhance the caching efficiency. To guarantee that all the blocks provided to the consumer are linearly independent, each Interest carries the global coefficients of the coded blocks already received in order to retrieve the next block, similar to precise matching; N round trips are therefore required to retrieve N blocks. Nguyen et al. proposed a lightweight caching and Interest aggregation strategy to ensure that all the coded blocks received by the consumer are independent; like RB matching, the rank of the global coefficients of the coded blocks already received by the consumer is carried by the Interest packet. Saltarin et al. proposed a protocol named NetCodCCN to permit Interest aggregation and pipelining.
Each node responds to an Interest once it has received enough coded blocks to recover the content, or once the rank r of the global coefficients of the coded blocks cached in its ContentStore is larger than the number of coded blocks already sent out over the arrival face. However, NetCodCCN shares a weakness with RB matching in that it may produce false negative decisions, i.e., a node may falsely decide that it cannot provide an innovative coded block for the consumer while such a block is actually available. Montpetit et al. proposed an architecture based on network coding, NC3N, where each Interest retrieves one coded block; however, their method does not include a strategy to ensure that all received blocks are independent. Liu et al. proposed an ICN-NC method that guarantees that all received blocks are provided by different CRs, to increase the probability of obtaining linearly independent blocks. Each Interest packet contains a record of the Interest exploration range of the previous round, and only CRs within a new exploration area are permitted to respond to these Interests. Several rounds are required to retrieve enough independent coded blocks, and an Interest may still retrieve linearly dependent coded blocks. A framework based on network coding for cache management in ICNs has also been proposed. Saltarin et al. proposed a distributed caching strategy for network coding-enabled ICNs, which gives CRs the responsibility of estimating the popularity of contents and ensuring that the most popular content is cached near the network edge.
Most of the existing schemes require several round trips to obtain sufficient linearly independent coded blocks to recover the content. In this paper, we propose a novel content delivery strategy to ensure that enough blocks can be retrieved within a single round. An on-path non-cooperative caching and replacement strategy based on network coding is proposed to guarantee that all blocks received by consumers are linearly independent. Moreover, in our scheme, coded blocks are generated only when traffic can be saved, rather than at the server and at every CR on the forwarding path, in order to reduce the cost of coding and decoding.
3. Method of Interest Aggregation and Separation
In ICN, chunk-based delivery strategies route chunks separately. Chunks may meet at an intermediate node along their forwarding paths to several consumers. Motivated by this, we propose a special request-specific coded-block (RSCB) scheme to encode chunks that meet during transport in order to reduce traffic.
3.1. Overview of RSNC
The definitions given in our previous study (referred to as RSNC) will be followed here. Each Interest I(p, S, m) requests a specific set of chunks of content p, where S is the set of chunk indices and m is the number of independent coded blocks required to recover them. Since chunks may be cached by different CRs, each CR can aggregate, separate, and forward Interests. If Interest 1, I(p, S1, m1), requests a set of chunks S1 and Interest 2, I(p, S2, m2), requests a set of chunks S2, then I(p, S1 ∪ S2, m) satisfies both Interests, where S1 ∪ S2 is the set of chunks used to generate the linearly independent coded blocks and m = max(m1, m2) is the number of coded blocks to be sent by the upstream CR. When m < m1 + m2, the traffic required to deliver chunks from upstream will be reduced.
Since an Interest sent by a consumer for multiple chunks will be copied and forwarded along a multicast tree, requests for different chunks sent from the same consumer will not meet again at a CR on the multicast tree. Therefore, the Interest aggregation operation "⊕" is defined to combine two Interests originating from at least two different consumers, i.e.,
I(p, S1, m1) ⊕ I(p, S2, m2) = I(p, S1 ∪ S2, max(m1, m2)). (1)
Similarly, a separation operation "⊖" is defined to split an Interest into several sub-Interests:
I(p, S, m) = I(p, S1, m1) ⊕ I(p, S2, m2), with S1 ∪ S2 = S and max(m1, m2) = m. (2)
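The two operations above can be sketched in a few lines of Python. This is an illustrative reading, not the paper's implementation: an Interest is assumed to be a tuple of (content name, chunk-index set, required block count), aggregation unions the chunk sets and takes the larger block count, and separation splits an Interest along given chunk partitions (e.g., one per upstream direction).

```python
# Sketch of the RSNC-style Interest operations. An Interest is modelled as a
# tuple (name, S, m): content name, requested chunk-index set, and the number
# of independent coded blocks needed. The tuple encoding is an assumption.

def aggregate(i1, i2):
    """Combine two Interests for the same content (the '+' operation)."""
    p1, s1, m1 = i1
    p2, s2, m2 = i2
    assert p1 == p2, "only Interests for the same content can be aggregated"
    return (p1, s1 | s2, max(m1, m2))

def separate(interest, partitions):
    """Split an Interest into sub-Interests, one per chunk subset
    (for example, one per upstream forwarding direction)."""
    p, s, m = interest
    subs = []
    for part in partitions:
        part = part & s                  # keep only chunks actually requested
        if part:
            subs.append((p, part, min(m, len(part))))
    return subs

# Two consumers each request two of the four chunks of content "p".
i1 = ("p", {1, 2}, 2)
i2 = ("p", {3, 4}, 2)
agg = aggregate(i1, i2)                  # one pending Interest upstream

# Chunks {1, 3} and {2, 4} lie in different upstream directions.
subs = separate(agg, [{1, 3}, {2, 4}])
```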
3.2. Interest Aggregation and Separation in RSCB
In contrast with RSNC, RSCB includes information on the sub-Interests B and the block count m in the aggregated Interest, which guarantees that linearly independent coded blocks are provided to consumers and minimizes the number of coded blocks transported in the network. To reduce the size of the Interest, each sub-Interest is represented as a binary number over the ordered chunk set S. For example, if Interest I(p, {b1, b2, b3, b4}, B, 2) is an aggregate of Interest 1, I(p, {b1, b2}, 2), and Interest 2, I(p, {b3, b4}, 2), the binary information of Interest 1 is 1100 and that of Interest 2 is 0011, and thus B = {1100, 0011}. Therefore, an Interest can be expressed as I(p, S, B, m), where p is the name of the requested content, S is a set of chunks that match the name of the content p, B is a set of binary numbers representing the sub-Interests (each sub-Interest is a subset of S), and m is the number of linearly independent coded blocks being requested. Therefore, any m linearly independent coded blocks that contain all chunks specified by S will satisfy the sub-Interests specified in B.
Equation (1) can thus be further modified as
I(p, S1, B1, m1) ⊕ I(p, S2, B2, m2) = I(p, S1 ∪ S2, B1 ∪ B2, m), (3)
where m is the minimum number of linearly independent coded blocks satisfying both Interests I1 and I2.
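The bitmask encoding of the sub-Interest information can be sketched as follows. The exact wire format is an assumption; here each sub-Interest's chunk subset is mapped to a binary number over the ordered chunk set, with the most significant bit corresponding to the first chunk.

```python
# Illustrative encoding of the sub-Interest set B as bitmasks over the
# ordered chunk set S. The bit ordering (first chunk = most significant
# bit) is an assumption made for the example in the text.

def chunk_bitmask(sub_chunks, ordered_chunks):
    """Represent a sub-Interest's chunk subset as a binary number over S."""
    return sum(1 << (len(ordered_chunks) - 1 - ordered_chunks.index(c))
               for c in sub_chunks)

S = [1, 2, 3, 4]                 # ordered chunk indices of content p
b1 = chunk_bitmask({1, 2}, S)    # sub-Interest for {b1, b2} -> 1100
b2 = chunk_bitmask({3, 4}, S)    # sub-Interest for {b3, b4} -> 0011
B = {b1, b2}
```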
It should be noted that the binary representation of the subset information is required to guarantee that the requested number of coded blocks is minimized. When an Interest carries only one sub-Interest or when m = |S|, this subset information is not necessary, as shown in Figure 1(a). Moreover, once a sub-Interest has been satisfied, its binary information will be deleted from B.
For instance, Figure 1(a) shows a CR receiving two Interests for content p from different interfaces, I(p, {b1, b2}, 2) and I(p, {b3, b4}, 2). Before these two Interests are forwarded, the CR aggregates the two requests into a single Interest I(p, {b1, b2, b3, b4}, {1100, 0011}, 2) using equation (3). B can then be used to reconstruct the subsets {b1, b2} and {b3, b4}.
Similarly, the separation operation used to split an Interest into several sub-Interests is modified so that the sub-Interests can be distributed over several interfaces of the CR towards different content sources:
I(p, S, B, m) = I(p, S1, B1, m1) ⊕ I(p, S2, B2, m2). (4)
If an Interest was formed by merging multiple Interests, the subsets should first be reconstructed from B, and these subsets should then be separated into sub-subsets using equation (2). New Interests, i.e., I1 and I2 in equation (4), are then generated by aggregating the sub-subsets using equation (3). This procedure is described in Algorithm 1, whose complexity is linear in the number of subsets. According to Algorithm 1, B can be reconstructed into the two subsets {b1, b2} and {b3, b4}. These subsets can be further divided into the sub-subsets {b1}, {b2}, {b3}, and {b4} using equation (2), and then aggregated into Interest I1 and Interest I2 according to equation (3). If the original blocks b1 and b3 are located in one direction and the original blocks b2 and b4 in another direction, the new Interests can be sent from two interfaces in two different directions, as shown in Figure 1(a). In this case, only two blocks will be transmitted, in contrast with RSNC, which requires four blocks to be transmitted.
If Interest 2 arrives after Interest 1 has already been sent upstream, the aggregated pending Interest will be updated accordingly. Since S2 may contain some chunks that have also been requested by Interest 1, these chunks should be removed from Interest 2. Therefore, we define an operation to determine the incremental Interest based on the separation operation:
I(p, S2, B2, m2) ⊘ I(p, S1, B1, m1) = I(p, S2 \ S1, B′, m′). (5)
Since Interest 1 will return at most m1 coded blocks, m′ ≥ m2 − m1 is required. Similarly, if a CR has cached a subset Sc of the requested blocks, only the remaining blocks S \ Sc need to be requested from the upstream CRs. In RSCB, the coded blocks generated from original blocks (referred to as coded-from-original blocks) are cached by the first node en route to the consumers. The coded-from-original blocks are then used in place of original blocks to satisfy future Interests. For example, in Figure 1(b), a coded-from-original block is identified by the set of chunks it combines. When the Interest is received by the CR holding such a block, the incremental Interest is determined and sent to the next node.
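The incremental-Interest operation can be sketched as below. This is one illustrative reading of equation (5) under the tuple model used earlier: chunks already pending under Interest 1 are dropped, and the block count is reduced by the at-most-m1 blocks that Interest 1 will return.

```python
# Sketch of the incremental-Interest operation, assuming Interests of the
# form (name, S, m). The rule that at least one block is requested whenever
# new chunks remain is an illustrative assumption.

def incremental(i2, i1):
    """Return the incremental Interest i2 after subtracting pending i1,
    or None if Interest 1 already covers Interest 2."""
    p, s2, m2 = i2
    _, s1, m1 = i1
    s_inc = s2 - s1                           # chunks not already pending
    m_inc = max(m2 - m1, 1 if s_inc else 0)   # shortfall in block count
    if m_inc == 0:
        return None
    return (p, s_inc, m_inc)
```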
The benefits of RSCB are illustrated in Figures 1 and 2. In Figure 1(a), two consumers connected to different edge routers have simultaneously requested content p, which contains four original blocks, b1, b2, b3, and b4. Each original block has a size of one unit, each CR has a two-unit cache capacity, and each link has a one-unit transmission cost. Figure 1 shows the communication and caching in RSCB: each edge router receives two coded-from-original blocks, each generated as a linear combination of two of the original blocks retrieved from different directions.
The total transmission cost is eight units. Figure 2 shows the conventional ICN communication: the intermediate CR receives the four original blocks (b1–b4) from upstream and forwards them downstream, responding to Interest 1 with the two original blocks b1 and b2, and to Interest 2 with the two further original blocks b3 and b4. The total transmission cost is 12 units. Therefore, our proposed solution saves 33% of the transmission cost compared with conventional ICN and 20% compared with our previous work.
3.3. Caching in RSCB
In RSCB, the original and coded-from-original blocks are cached by CRs to respond to future Interests. In order to provide consumers with sufficient linearly independent blocks in a single round, no coded blocks that were encoded from other coded blocks are cached in the network. A coded-from-original block can be cached by only a single CR, namely the immediate downstream neighbor of the CR that generated the block; thus, the two coded-from-original blocks in Figure 1(b) will each be cached by only one CR. A coded-from-original block can be used in place of the original blocks to respond to future Interests. When cache replacement happens, a CR encodes several original blocks into one coded-from-original block to release caching space. This ensures that all information contained in the original blocks is retained in the CR.
The CCN architecture is the most popular ICN architecture, and we have selected it to implement RSCB. To enable network coding in CCN, some changes are required.
4.1. Content Publishing and Requesting
In CCN, content is split into several smaller chunks, with each chunk identified by a unique hierarchical name. In RSCB, content is first divided into generations and each generation is then divided into chunks, i.e., original blocks. We denote content as p(h, N), where p is the name of the content, N (the number of chunks per generation) is a design parameter, and h (the number of generations) can be calculated from the content size and N. The content name should contain information on h and N. For instance, the name of a chunk is /sysu.edu.cn/largefile/h/N/2/1, where /sysu.edu.cn/largefile/h/N is the content name, 2 is the generation ID, and 1 is the chunk ID (referred to as the original block index).
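A minimal parser for this naming scheme can be sketched as follows, using the example name from the text; the assumption that the generation ID and chunk ID always occupy the last two name components follows from that example.

```python
# Parse a hierarchical RSCB chunk name of the form
#   <content name>/<generation ID>/<chunk ID>
# Field positions are inferred from the /sysu.edu.cn/largefile/h/N/2/1
# example in the text.

def parse_chunk_name(name):
    parts = name.strip("/").split("/")
    content_name = "/" + "/".join(parts[:-2])
    generation_id = int(parts[-2])
    chunk_id = int(parts[-1])
    return content_name, generation_id, chunk_id

info = parse_chunk_name("/sysu.edu.cn/largefile/h/N/2/1")
```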
A consumer will generate a set of Interests, one per generation, in order to request content p. Each Interest requests a set of original blocks from a single generation, where h is the number of generations. The number of Interests sent by the consumer depends on the flow control schemes that are in place in the network. The consumer can send Interests either sequentially or simultaneously.
Since the forwarding paths of requests for different chunks generated by the same consumer will form a multicast tree, these requests will not meet in any intermediate CR on the multicast tree. Interests are responded to with chunks or coded blocks which are linear combinations of chunks that have been specified by the Interest. Random linear network coding (RLNC) is used to generate the coded blocks within each generation. For convenience, in the remainder of this paper, we will not explicitly state which generation each chunk belongs to. Our model makes the assumption that a chunk-based routing and flow control scheme is in place.
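The RLNC machinery mentioned above can be sketched in a self-contained way. The paper does not state the field polynomial, so GF(2^8) with the common polynomial 0x11d is an assumption; the rank routine shows the standard Gaussian-elimination test for whether a set of received blocks is sufficient to recover a generation.

```python
import random

# Self-contained sketch of random linear network coding (RLNC) over GF(2^8).
# The field polynomial 0x11d is an assumption; the paper only states that
# RLNC over a finite field is used within each generation.

def gf_mul(a, b, poly=0x11d):
    """Carry-less multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def encode(chunks, coeffs):
    """Coded block = coefficient-weighted XOR-sum of the generation's chunks."""
    size = len(chunks[0])
    block = bytearray(size)
    for c, chunk in zip(coeffs, chunks):
        for i in range(size):
            block[i] ^= gf_mul(c, chunk[i])
    return bytes(block)

def rank(vectors):
    """Rank of the global coefficient vectors over GF(2^8): the content of a
    generation is recoverable once the rank equals the generation size."""
    rows = [list(v) for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                # division-free elimination: cross-scale both rows and XOR
                f, pv = rows[i][col], rows[r][col]
                rows[i] = [gf_mul(x, pv) ^ gf_mul(y, f)
                           for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

# Example: one coded block over a two-chunk generation with random coefficients.
chunks = [bytes([1, 0]), bytes([0, 1])]
coeffs = [random.randint(1, 255) for _ in chunks]
coded = encode(chunks, coeffs)
```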
4.2. Interest and Data
In CCN, all communications are driven by consumers. Consumers can receive chunks of content from multiple sources, which may include the content provider and CRs. A consumer interested in content p will send a set of requests, one for each chunk. Before these requests are forwarded, the CR determines the forwarding interface of each chunk using the forwarding information base (FIB). Requests that share a forwarding interface are aggregated into a single Interest. Each Interest can therefore contain multiple requests for a set of chunks S, where S is the set of chunk names.
There are two types of CCN packets, Interests and data. In our model, the network coding information is appended to the selector field of the Interest packet and includes the set of chunk names S, the sub-Interests B, and the number of required blocks m. The coefficient vector of the coded block and the caching flag F are contained within the signed info field of the data packet. The data field of the data packet contains the original or coded block(s).
The Interest and data packets used in our model are formulated as follows:
(i) I(p, S, B, m) defines an Interest for content p, where p is the content name, S is the set of chunk names, and m is the number of required blocks. B represents the sub-Interests and is omitted when the Interest carries only a single sub-Interest. For convenience, we express the Interest as I.
(ii) D(p, Sd, F, data) defines a data packet for content p containing data, which is either a chunk or a coded block (a linear combination of several chunks specified by the set Sd). F is a caching flag which indicates whether the data is cacheable or not. If the data is a coded block, we obtain Sd ⊆ S, where Sd is the set of chunk names carried by the data and S is the set carried by the Interest.
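One possible in-memory shape for these two packet types is sketched below. The field names mirror the symbols used in the text (p, S, B, m, F); the concrete types and defaults are assumptions, not the paper's wire format.

```python
from dataclasses import dataclass

# Illustrative in-memory representations of the RSCB packets. Field names
# follow the text; types and defaults are assumptions.

@dataclass(frozen=True)
class Interest:
    p: str                        # content name
    S: frozenset                  # chunk names requested
    m: int                        # number of independent coded blocks required
    B: frozenset = frozenset()    # bitmask-encoded sub-Interests (optional)

@dataclass
class Data:
    p: str                        # content name
    S: frozenset                  # chunks combined in the carried block
    F: int                        # caching flag: 1 = cacheable, 0 = not
    payload: bytes                # the original or coded block
    coeffs: tuple = ()            # coefficient vector (signed info field)
```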
4.3. Forwarding Module
The forwarding module of RSCB contains three components: the ContentStore (CS), the pending interest table (PIT), and the forwarding information base (FIB). The blocks received by a CR are cached by the CS module. A CS entry can be formulated as (p, S, n, data), defined as follows:
(i) p: content name.
(ii) Index (S, n): S is a set of chunk names and n is the number of cached blocks. Since both the original and the coded-from-original blocks can be cached, we obtain n ≤ |S|.
(iii) Data: the original or coded-from-original blocks.
In contrast with CCN, each CR interface in RSCB maintains a PIT. The PIT can have two types of entry, PIT-OUT and PIT-IN, which record information on Interests already sent or received on the interface, respectively. PIT-OUT and PIT-IN can be defined as follows:
(i) PIT-OUT entry (I, Fin): indicates that I is an aggregated Interest generated by aggregating several Interests received from the interface(s) in Fin. The aggregated Interest has already been sent out over the interface, but a response has not yet been received from the upstream CR.
(ii) PIT-IN entry (I, f): indicates that I is an aggregated Interest generated by aggregating several Interests received from interface f. A response has not yet been sent over interface f.
The FIB is the same as for CCN. When an Interest is received by a CR, its CS is first consulted, followed by PIT and finally FIB. Data packets will be sent back to consumers using the same path that was created by the Interest, but in the opposite direction.
5. Communication Scheme
5.1. Forwarding Interest
When a CR receives an Interest over interface f, the first step is to check its CS. If the CS contains all the chunks, or coded-from-original blocks containing all the information in the Interested set S, the CR will respond to the Interest directly, as described in Algorithm 2, whose complexity is linear in the number of cached coded blocks. If m = |S|, the CR responds to the Interest with the cached blocks (chunks or coded-from-original blocks) without coding; otherwise, the CR responds with coded blocks generated from the blocks cached in the CS that contain the chunk information specified by the set S. The caching flag will be turned on, i.e., F = 1, if the block used to respond to the Interest is an original block or a coded-from-original block encoded by that CR; otherwise, F = 0. In RSCB, the CR performs network coding only if it will save on transmission costs. For instance, as shown in Figure 1(a), a CR may respond to an Interest with the two coded-from-original blocks received from its two upstream neighbors without further coding, thereby saving the cost of coding.
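The CS response rule can be sketched as follows. This is a simplified reading: the CS is modelled as a plain chunk store, the decision to code is driven by whether m equals the number of requested chunks, and the `encode` callable stands in for the RLNC encoder; all helper names are illustrative.

```python
# Sketch of the CS response rule. cs_blocks maps chunk names to payloads;
# encode is a callable producing one coded block from a list of payloads
# (for example, an RLNC encoder). Names and shapes are assumptions.

def respond_from_cs(interest, cs_blocks, encode):
    """Return data tuples (kind, chunks, payload, F) or None if the CS
    cannot satisfy the Interest."""
    name, chunks, m = interest
    if not chunks <= set(cs_blocks):
        return None                    # CS cannot satisfy the Interest
    if m == len(chunks):
        # respond with the cached blocks directly; no coding cost incurred
        return [("data", c, cs_blocks[c], 1) for c in sorted(chunks)]
    # respond with m coded blocks spanning the requested chunk set
    payloads = [cs_blocks[c] for c in sorted(chunks)]
    return [("data", tuple(sorted(chunks)), encode(payloads), 1)
            for _ in range(m)]

# Plain response: two chunks requested, two blocks required, no coding.
plain = respond_from_cs(("p", {"1", "2"}, 2),
                        {"1": b"\x01", "2": b"\x02"}, encode=None)
```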
If the Interest cannot be satisfied by the CR, the PIT-IN of the arrival interface will be checked. If there is a matching entry, the CR will aggregate the PIT-IN entry and the Interest using equation (3) and update the PIT-IN entry with the aggregated chunk set and block count. The incremental Interest is then determined.
The CR will split the incremental requests into several Interests using Algorithm 1. If one of the Interests, e.g., I1, needs to be transmitted over interface f, the PIT-OUT of interface f will be consulted. If there is a matching PIT-OUT entry, a new incremental Interest will be generated using equation (5) and transmitted over interface f if it still requests at least one block. The PIT-OUT entry will then be updated with the new chunk set and block count, and the arrival interface will be added to its interface set Fin. Algorithm 3 describes the process used to forward an Interest; its complexity is linear in the number of sub-Interests.
5.2. Forwarding Data
When a data packet is received by a CR over interface f, the PIT-OUT of interface f will be checked. If there is no PIT-OUT match, the data will be discarded directly, since the CR has not requested the block; otherwise, the caching flag will be checked and the matching PIT-OUT entry will be updated according to Algorithm 4. If F = 1, the block carried by the data will be cached in the CS; otherwise, it will be temporarily cached in CACHE. The CR will then check whether more chunks can be obtained by decoding the blocks cached in CS and CACHE. If the CR has already received enough blocks of content p to recover the content, the chunks decoded from the received blocks will be cached in the CS and all of the blocks of content p cached in CACHE will be deleted. In this case, the CR can satisfy any Interest for content p.
If the arrival interface is included in the Fin of the matching PIT-OUT entry, the corresponding PIT-IN entries are examined. For each pending PIT-IN entry, the CR checks whether it can be satisfied using the blocks cached in CS and CACHE. If it can, the CR generates linearly independent combinations of the blocks specified by the entry's chunk set and sends the data packets over the corresponding interface; each data packet carries a coded block, and the satisfied PIT-IN entry is then deleted. Once the PIT-IN entries of all interfaces in the Fin of the matched PIT-OUT entry are satisfied, the coded blocks of content p cached in CACHE are deleted, as described by Algorithm 4. The complexity of Algorithm 4 is proportional to the number of interfaces in Fin multiplied by the complexity of Algorithm 2, i.e., by the number of coded blocks.
The network will try to deliver chunks without introducing extra traffic in order to increase the independence of the blocks cached in the CRs. When a CR receives a data packet carrying a plain chunk that is pending for a downstream interface, the data packet will be sent out over that interface immediately, without further processing or waiting for other data packets. The CR will then remove the chunk from the corresponding PIT-IN entry and decrement its block count; if the entry is fully satisfied, the CR will delete it. In this way, the time to download the chunk is reduced and the cost of coding and decoding is saved without introducing additional traffic.
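The plain-chunk fast path can be sketched as below. The PIT-IN representation (one pending chunk set and block count per face) is a simplification of the structures described earlier, used here only to illustrate the forward-immediately-and-shrink behaviour.

```python
# Sketch of the plain-chunk fast path: a data packet carrying an uncoded
# chunk is forwarded immediately to every face whose pending Interest
# requested it, and each PIT-IN entry is shrunk accordingly. The PIT-IN
# shape {face: (pending_chunk_set, m)} is a simplifying assumption.

def fast_forward_chunk(chunk_name, pit_in):
    """Return the list of faces the chunk should be forwarded to,
    updating pit_in in place."""
    out_faces = []
    for face, (pending, m) in list(pit_in.items()):
        if chunk_name in pending:
            out_faces.append(face)
            remaining = pending - {chunk_name}
            if remaining and m - 1 > 0:
                pit_in[face] = (remaining, m - 1)   # still partially pending
            else:
                del pit_in[face]                    # Interest fully satisfied
    return out_faces
```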
5.3. Cache Policy
Network coding-enabled ICN (NC-ICN) divides the content into original blocks. In traditional NC-ICN, each coded block is a linear combination of the original blocks, and the coded blocks are cached by the CRs along their forwarding paths to a group of consumers. Figure 3(a) shows that, in traditional NC-ICN, one network provides part of the coded blocks to the consumers in one group while a second network provides the remaining coded blocks. During this process, coded blocks are cached by multiple CRs in both networks. At a later time, when the consumers in the second group multicast their Interests for coded blocks of content p, these Interests will first be received by CRs in their own network. Each CR responds to the Interests with its cached coded blocks independently, so blocks that duplicate information already held by the consumer may be delivered. However, the maximum number of independent coded blocks a consumer can receive from a single network is bounded by the number of independent blocks cached there, so some of the delivered blocks are of no benefit to the consumer and waste resources. Therefore, the conventional in-network caching strategy is not suitable for NC-ICN.
To address this issue, we introduce a simple cache mechanism to guarantee that the blocks provided to consumers are independent. In RSCB, only original blocks and coded-from-original blocks are cached by CRs; no coded blocks that were encoded from other coded blocks can be cached in the network. The received or decoded original blocks can be cached by any CR. A coded-from-original block can be cached by only a single CR, namely the immediate downstream neighbor of the CR that generated the block. Thus, any N coded-from-original blocks cached in the network will be linearly independent, where N is the number of blocks required to recover the content. The caching flag F in the data packet indicates whether the data is cacheable or not.
Since CRs have limited storage capacity, a cache replacement policy is required. When cache replacement occurs, the candidate content to be discarded is selected using an existing content-level cache replacement policy, e.g., least recently used (LRU). Assume that k units of cache space are required to cache the newly received or decoded blocks and that the cache space used by the candidate content is u units (one unit per block). If u ≤ k, the candidate content is deleted and the next candidate is selected; this step is repeated until only some of the cached blocks of the candidate content need to be discarded, i.e., u > k. Chunk-level cache replacement is then applied to the candidate content. Firstly, any original blocks whose information is already contained in the cached coded-from-original blocks are discarded. Secondly, the remaining original blocks are coded into a single coded-from-original block. Finally, the received coded-from-original blocks are randomly discarded. These three steps are performed in turn until there is sufficient space for the newly received or decoded blocks, as described in Algorithm 5, whose complexity is linear in the number of evicted content items. If a content item is rarely accessed, only coded-from-original blocks will be cached to respond to future Interests. Since a single coded-from-original block can respond to an aggregated Interest containing multiple requests for different chunks sent by multiple consumers, our chunk-level cache replacement policy can effectively increase the cache efficiency.
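The chunk-level "coding-instead-of-evicting" steps can be sketched as follows. This is a simplified model, assuming one cache unit per block; `xor_combine` stands in for the RLNC re-encoding of the surviving originals into a single coded-from-original block, and step 1 (dropping originals already covered by cached coded blocks) is noted but not modelled.

```python
from functools import reduce

# Sketch of the chunk-level replacement steps for one candidate content,
# assuming one cache unit per block. xor_combine stands in for re-encoding
# the remaining originals into a single coded-from-original block.

def xor_combine(blocks):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        blocks))

def free_space(originals, coded_from_original, target_units):
    """Shrink the candidate content's footprint to at most target_units.
    originals: {chunk_name: payload}; coded_from_original: list of payloads.
    Step 1 (drop originals already covered by coded blocks) is omitted."""
    # Step 2: code the remaining originals into a single block.
    if len(originals) + len(coded_from_original) > target_units \
            and len(originals) > 1:
        coded_from_original = coded_from_original + \
            [xor_combine(list(originals.values()))]
        originals = {}
    # Step 3: randomly drop received coded-from-original blocks if still short.
    while len(originals) + len(coded_from_original) > target_units \
            and coded_from_original:
        coded_from_original.pop()
    return originals, coded_from_original
```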
In this section, the performance of our model is investigated by comparison with three other schemes: the chunk-level CCN strategy (CCN), NC-CCN, and CodingCache (CC). The caching strategy leave copy everywhere (LCE) is incorporated into the above strategies. In LCE, each block or chunk is cached by all CRs on the forwarding path between the content provider and the consumer.
6.1. Simulation Model
BRITE [27, 28] was used to generate the network topology, since it can roughly reflect the actual Internet topology, and Dijkstra's algorithm was used to generate the FIB tables. All links have a bandwidth of 1 Gbps. A total of 1000 end hosts were connected to 100 CRs, and 10 original content providers were randomly connected to the CRs. 10,000 files were equally partitioned into 400 classes. Each content file was 1 GB and was divided into 10 generations, each containing 10 chunks of 10 MB. In our simulation, only chunks in the same generation could be encoded together. The content popularity followed a Zipf distribution, and Interests sent by consumers followed a Poisson process. The request number was defined as the number of Interests sent by consumers during the processing period. In our simulations, each CR was equally configured with a cache space of 0.1%, 0.25%, 0.5%, 1%, or 2% of the overall content catalog size; the default cache size of each CR was set to 10 GB, i.e., 1% of the total content catalog size. Random linear network coding (RLNC) over a finite field was used for coding. The coefficient vector and the generation ID are carried in the signed-info field of the data packet. The performance of all four strategies was evaluated under the same simulation environment.
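As an illustration of the workload model, the sketch below draws a Zipf-distributed request stream over a content catalog. The function names and the fixed seed are our own; only the marginal popularity of contents (not the Poisson arrival times) is modeled here:

```python
import random

def zipf_weights(n, alpha):
    """Unnormalized Zipf popularity: rank k gets weight 1 / k**alpha,
    so rank 1 is the most popular content."""
    return [1.0 / (k ** alpha) for k in range(1, n + 1)]

def draw_requests(n_contents, alpha, n_requests, seed=42):
    """Draw `n_requests` content ranks according to Zipf popularity;
    a larger alpha concentrates requests on fewer contents."""
    rng = random.Random(seed)
    w = zipf_weights(n_contents, alpha)
    return rng.choices(range(1, n_contents + 1), weights=w, k=n_requests)
```

With a larger alpha, the drawn requests concentrate on a smaller set of contents, which is why the average download time drops as the Zipf parameter grows (Figure 4(a)).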
The following parameters were used for the evaluation:
(i) Average download time: the average time for consumers to download each successfully received content.
(ii) Average number of hops: the average number of hops traversed by each successfully received chunk from the provider to the consumer.
(iii) Cache hit ratio: the ratio of the number of Interests satisfied by the caches to the number of Interests satisfied by either the caches or the server.
(iv) Server hit reduction ratio: R(t1, t2) = (1/N) Σ_i δ_i, where δ_i = 1 if chunk i is sent from a cache or satisfied by an aggregated Interest and δ_i = 0 otherwise, N is the number of chunks received by all consumers, and (t1, t2) indicates that the data were collected from time t1 to time t2.
(v) Traffic: the total traffic required to deliver the data packets over the whole request process.
(vi) Average number of Interests: the average number of Interest packets handled by each CR for each chunk successfully received by the consumer.
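The cache hit ratio and the server hit reduction ratio can be computed from a simulation trace as sketched below. The event encodings ("cache"/"server" strings and 0/1 flags) are illustrative assumptions, not the paper's actual trace format:

```python
def cache_hit_ratio(events):
    """events: one entry per satisfied Interest, either "cache" (served
    by a CR's cache) or "server" (served by the content server)."""
    hits = sum(1 for e in events if e == "cache")
    return hits / len(events)

def server_hit_reduction_ratio(chunks):
    """chunks: one flag per received chunk over (t1, t2): 1 if the chunk
    came from a cache or an aggregated Interest, 0 if from the server.
    Returns (1/N) * sum of the flags."""
    return sum(chunks) / len(chunks)
```
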
6.2. Simulation Results
Due to its network coding-based content delivery and caching strategies, our proposed request-specific coded-block (RSCB) scheme always achieves the best performance: the shortest average download time, the highest cache hit ratio, the highest server hit reduction ratio, and the lowest transmission volume. RSCB ensures that consumers receive sufficient linearly independent coded blocks within a single round and that the coded blocks cached by the CRs can be reused to serve multiple chunks.
Figure 4 plots the average download time of the four caching strategies for different system parameters. The average download time decreases as the Zipf parameter is increased, as shown in Figure 4(a), since a larger Zipf parameter indicates that the Interests sent by consumers are concentrated on a smaller set of contents. As the number of requests increases, chunks that have already been requested will be cached on more CRs and thus consumers can retrieve chunks directly from the CRs, which are much closer to the consumers. Therefore, the average download time will be reduced (Figure 4(b)). RSCB performs much better even for a small Zipf parameter and a low number of Interests, since it can retrieve chunks or coded blocks from multiple CRs simultaneously. Compared with other schemes, RSCB provides consumers with enough independent coded blocks in a single round.
In RSCB, one coded-from-original block can respond to an aggregated Interest for multiple chunks requested by different consumers. For instance, a single coded-from-original block can satisfy the Interest for one of its component chunks from consumer 1 and the Interest for another component chunk from consumer 2, as shown in Figure 1(b). Thus, RSCB achieves the best caching performance in terms of average download hops (Figure 5), cache hit ratio (Figure 6), and server hit reduction ratio (Figure 7).
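One simplified way to see how a single coded block can serve requests for different chunks is the classic XOR (i.e., GF(2)) case, where each consumer already holds the other component chunk as side information. RSCB uses RLNC over a larger finite field, so this is only a toy illustration with made-up payloads:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings, i.e., addition in GF(2)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two original chunks of the same generation (toy payloads).
b1, b2 = b"chunk-one", b"chunk-two"

# The CR caches one coded-from-original block c = b1 XOR b2 instead of
# both originals, halving the cache space used for this generation.
c = xor_bytes(b1, b2)

# Consumer 1 holds b2 and wants b1: it decodes b1 = c XOR b2.
# Consumer 2 holds b1 and wants b2: it decodes b2 = c XOR b1.
assert xor_bytes(c, b2) == b1
assert xor_bytes(c, b1) == b2
```

The same single cached block thus answers an aggregated Interest covering both consumers' requests, which is the source of RSCB's caching gains.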
Figure 8 shows the traffic for the different caching schemes; RSCB has the lowest transmission volume. Moreover, as the number of requests increases, RSCB achieves greater traffic savings due to its Interest aggregation scheme, which reduces the traffic required to deliver blocks, as per equation (3).
RSCB can also aggregate Interests for different chunks into a single Interest. As shown in Figure 9, the average number of Interests processed by each CR is much lower than in the other schemes. In ICN, a consumer requesting content that consists of k chunks sends out k Interests, and thus a CR needs to process k Interests. In RSCB, however, only one aggregated Interest containing these requests is processed by the CR, which reduces the cost of transmitting and processing Interests.
7. Conclusion and Discussion
In this paper, we have proposed a request-specific coded-block (RSCB) strategy to reduce the transmission volume. Additionally, a chunk-level, on-path, non-cooperative coded caching and replacement strategy has been proposed to improve the caching efficiency. Our method enables a consumer to multicast a set of Interests in order to obtain multiple content chunks simultaneously from multiple CRs. Traffic is reduced by encoding chunks that meet at an intermediate CR and have been requested by different consumers. A novel Interest forwarding-responding strategy guarantees that the minimum number of coded blocks is requested and that these blocks are linearly independent. A network coding-based caching and replacement mechanism guarantees that the cached blocks can be reused. A chunk-level coded cache replacement strategy has been proposed to discard blocks: rather than discarding the original blocks, the CR encodes them into a single coded-from-original block to release cache space when replacement is required. A single coded-from-original block can satisfy multiple Interests from different consumers for a set of its component original blocks, which increases the caching diversity without requiring extra cache space. The simulation results confirm that the RSCB scheme outperforms the other three strategies.
However, despite the many benefits of deploying network coding in ICN, it also introduces additional computation and communication costs. Several studies have shown that RLNC is a practical method with acceptable overheads. Since ICN is a new architecture, many issues remain to be resolved before it can be deployed, such as the efficient operation of the PIT and FIB at the chunk level [30, 31].
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This study was supported by the National Natural Science Foundation of China (nos. 61571141, 61702120, 61972104, and 61902080), the National Key Research and Development Project (no. SQ2019YFB180098), the Guangdong Natural Science Foundation (no. 2017A030310591), the Guangdong Provincial Application-Oriented Technical Research and Development Special Fund Project (nos. 2017B010125003 and 2015B010131017), the Key Areas of Guangdong Province (nos. 2019B010118001 and 2017B030306015), the Guangdong Future Network Engineering Technology Research Center (no. 2016GCZX006), the Science and Technology Program of Guangzhou (no. 201604016108), the Project of Youth Innovation Talent of Universities in Guangdong (nos. 2017KQNCX120 and 2016KQNCX091), the Guangdong Science and Technology Development Project (no. 2017A090905023), the Key Projects of Guangdong Science and Technology, and the Science and Technology Project in Guangzhou (no. 201803010081).
- B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, “A survey of information-centric networking,” IEEE Communications Magazine, vol. 50, no. 7, pp. 26–36, 2012.
- I. U. Din, S. Hassan, M. K. Khan, M. Guizani, O. Ghazali, and A. Habbal, “Caching in information-centric networking: strategies, challenges, and future research directions,” IEEE Communications Surveys and Tutorials, vol. 20, no. 2, pp. 1443–1474, 2018.
- S. Lee, I. Yeom, and D. Kim, “T-caching: enhancing feasibility of in-network caching in ICN,” IEEE Transactions on Parallel and Distributed Systems, vol. 31, no. 7, pp. 1486–1498, 2020.
- V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, “Networking named content,” Communications of the ACM, vol. 55, no. 1, pp. 117–124, 2012.
- R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” IEEE Transactions on Information Theory, vol. 46, no. 4, pp. 1204–1216, 2000.
- B. Saleh and D. Qiu, “Performance analysis of network-coding-based p2p live streaming systems,” IEEE/ACM Transactions on Networking, vol. 24, no. 4, pp. 2140–2153, 2016.
- C. Gkantsidis and P. Rodriguez, “Network coding for large scale content distribution,” in Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 4, pp. 2235–2245, Miami, FL, USA, March 2005.
- M. Karmoose, M. Cardone, and C. Fragouli, “Simplifying wireless social caching via network coding,” IEEE Transactions on Communications, vol. 66, no. 11, pp. 5512–5525, 2018.
- C. Xu, P. Wang, C. Xiong, X. Wei, and G.-M. Muntean, “Pipeline network coding-based multipath data transfer in heterogeneous wireless networks,” IEEE Transactions on Broadcasting, vol. 63, no. 2, pp. 376–390, 2017.
- M. Bilal and S.-G. Kang, “Network-coding approach for information-centric networking,” IEEE Systems Journal, vol. 13, no. 2, pp. 1376–1385, 2019.
- H. R. Sadjadpour, “A new design for information centric networks,” in Proceedings of the 48th Annual Conference on Information Sciences and Systems (CISS), pp. 1–6, Princeton, NJ, USA, March 2014.
- J. Wang, J. Ren, K. Lu, J. Wang, S. Liu, and C. Westphal, “An optimal cache management framework for information-centric networks with network coding,” in Proceedings of the 2014 IFIP Networking Conference, pp. 1–9, Trondheim, Norway, June 2014.
- Q. Xiang, H. Zhang, J. Wang, G. Xing, S. Lin, and X. Liu, “On optimal diversity in network-coding-based routing in wireless networks,” in Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM), pp. 765–773, Kowloon, Hong Kong, April 2015.
- J. Llorca, A. M. Tulino, K. Guan, and D. C. Kilper, “Network-coded caching-aided multicast for efficient content delivery,” in Proceedings of the 2013 IEEE International Conference on Communications (ICC), pp. 3557–3562, Budapest, Hungary, 2013.
- P. Talebifard, H. Nicanfar, and V. C. Leung, “A content centric approach to energy efficient data dissemination,” in Proceedings of the 2013 IEEE International Systems Conference (SysCon), pp. 873–877, Orlando, FL, USA, April 2013.
- Q. Wu, Z. Li, and G. Xie, “CodingCache: multipath-aware CCN cache with network coding,” in Proceedings of the 3rd ACM SIGCOMM Workshop on Information-Centric Networking-ICN’13, pp. 41–42, USA, 2013.
- M.-J. Montpetit, C. Westphal, and D. Trossen, “Network coding meets information-centric networking: an architectural case for information dispersion through native network coding,” in Proceedings of the 1st ACM workshop on Emerging Name-Oriented Mobile Networking Design-Architecture, Algorithms, and Applications-NoM’12, pp. 31–36, USA, June 2012.
- W.-X. Liu, S.-Z. Yu, G. Tan, and J. Cai, “Information-centric networking with built-in network coding to achieve multisource transmission at network-layer,” Computer Networks, vol. 115, pp. 110–128, 2017.
- J. Saltarin, E. Bourtsoulatze, N. Thomos, and T. Braun, “NetCodCCN: a network coding approach for content-centric networks,” in Proceedings of the IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications, pp. 1–9, San Francisco, CA, USA, April 2016.
- D. Nguyen, M. Fukushima, K. Sugiyama, and A. Tagami, “CoNAT: a network coding-based interest aggregation in content centric networks,” in Proceedings of the 2015 IEEE International Conference on Communications (ICC), pp. 5715–5720, London, UK, June 2015.
- G. Zhang and Z. Xu, “Combing CCN with network coding: an architectural perspective,” Computer Networks, vol. 94, pp. 219–230, 2016.
- C. Shan, J. Cai, Y. Liu, and J. Luo, “Node importance to community based caching strategy for information centric networking,” Concurrency and Computation: Practice and Experience, vol. 31, no. 21, Article ID e4797, 2019.
- Y. Liu and S.-Z. Yu, “Network coding-based multisource content delivery in content centric networking,” Journal of Network and Computer Applications, vol. 64, pp. 167–175, 2016.
- J. Wang, J. Ren, K. Lu, J. Wang, S. Liu, and C. Westphal, “A minimum cost cache management framework for information-centric networks with network coding,” Computer Networks, vol. 110, pp. 1–17, 2016.
- N. Lal, S. Kumar, and V. K. Chaurasiya, “A network-coded caching-based multicasting scheme for information-centric networking (ICN),” Iranian Journal of Science and Technology, vol. 43, no. 3, pp. 427–438, 2019.
- J. Saltarin, T. Braun, E. Bourtsoulatze, and N. Thomos, “PopNetCod: a popularity-based caching policy for network coding enabled named data networking,” 2019, http://arxiv.org/abs/1901.01187.
- A. Medina, A. Lakhina, I. Matta, and J. Byers, “Brite: an approach to universal topology generation,” in Proceedings of the Ninth International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pp. 346–353, Cincinnati, OH, USA, August 2001.
- A. Medina, I. Matta, and J. Byers, “On the origin of power laws in internet topologies,” ACM SIGCOMM Computer Communication Review, vol. 30, no. 2, pp. 18–28, Apr. 2000.
- C. Fragouli, J. Widmer, and J.-Y. Le Boudec, “Efficient broadcasting using network coding,” IEEE/ACM Transactions on Networking, vol. 16, no. 2, pp. 450–463, 2008.
- Y. Wang, K. He, H. Dai et al., “Scalable name lookup in NDN using effective name component encoding,” in Proceedings of the ICDCS ’12, pp. 688–697, June 2012.
- T. Song, H. Yuan, P. Crowley, and B. Zhang, “Scalable name-based packet forwarding: from millions to billions,” in Proceedings of the ACM-ICN’15, pp. 19–28, San Francisco, CA, USA, 2015.
Copyright © 2020 Yan Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.