Abstract

Vehicular Named Data Network (VNDN) is considered a strong paradigm for vehicular applications. In VNDN, each node has its own cache, but the limited cache size directly degrades performance in a highly dynamic environment that requires massive and fast content delivery. Cooperative caching plays an efficient role in mitigating these issues in VNDN. Most studies on cooperative caching focus on content replacement and caching algorithms and evaluate these methods in a static rather than a dynamic environment. In addition, few existing approaches address cache diversity and latency in VNDN. This paper proposes a Dynamic Cooperative Cache Management Scheme (DCCMS) based on social interaction and popular data, which improves cache efficiency and is evaluated in a dynamic environment. We design a two-level dynamic caching scheme in which we choose the right caching node, i.e., the node that communicates most frequently with other nodes, keep a copy of the most popular content there, and distribute it to the requesting node when needed. The main intention of DCCMS is to improve cache performance in terms of latency, server load, cache hit ratio, average hop count, cache utilization, and diversity. The simulation results show that our proposed DCCMS scheme achieves better cache performance than other state-of-the-art approaches.

1. Introduction

With the rapid growth of the vehicular population [1], many applications and services have emerged to provide desired data to requesting vehicles. In this context, VANETs entered this race as a modified version of Mobile Ad hoc Networks (MANETs) [2], aimed at improving communication among moving vehicles and covering large-scale traffic environments. Increasing wireless content traffic places a great burden on VANETs. According to the report in [3], 60% of content is video or audio. Many studies have proposed solutions to handle the growing volume of data traffic and to maintain audio and video streaming [4]. In these studies, content delivery is one of the bottlenecks in VANETs due to the dynamic environment and frequently disconnected links between nodes. The future paradigm Named Data Networking (NDN) brings efficient services [5] that are more concerned with the data itself: NDN addresses content by "what" rather than "where." In NDN, every node has a cache and three data structures: (1) the Content Store (CS), (2) the Pending Interest Table (PIT), and (3) the Forwarding Information Base (FIB). Accordingly, VANETs have evolved into Vehicular Named Data Networks (VNDN) owing to these services. VNDN has been proposed as a key promising solution, especially for mobile network caching, and it plays a vital role in monitoring traffic issues and supporting road safety applications [6].

The main focus of VNDN is content distribution between nodes in the network. In traditional vehicular networks (VNs), a specific IP address [7] is allocated to every node for sharing information with other nodes. Moreover, a secured channel must connect the source and destination to transmit confidential content, so traditional networks raise mobility management issues when transferring secured data between nodes. In a VNDN environment, in contrast, each node retrieves data via content names and the content is self-authenticating. A consumer and its neighboring nodes likely travel in the same direction, and at the same speed the link can be more stable and reliable. Moreover, popular content can be fetched easily, which may reduce mobility issues in VNDN. Recently, caching in vehicular networks has been characterized as noncooperative [8]: the caching decision is based on local rather than global demand, which may make it difficult for a consumer to access the desired data from nearby producer nodes and raises mobility and access-delay issues.

Although VNDN has many advantages, VNDN in dynamic traffic faces many issues, such as degraded network performance, cache misses, data delay, data loss, mobility, and resource management. Figure 1 illustrates the following data access challenges in a VANET scenario, where V1 denotes the consumer, V2 a forwarding vehicle, and V3 the content provider:
(i) Assume that at time t, V1 sends the request, and the forwarding distance between V1 and V3 is 120 m. V1 is moving at 60 km/h while V3 is moving at 80 km/h. When the desired content arrives at V2 from V3, V1 has unfortunately moved out of the transmission range of V2. This happens due to the noncooperative method and the mobility speed of V1; as a result, V1 cannot receive the desired data due to the large distance from V3.
(ii) Suppose the transmission range of V2 is larger than that of V3. In this situation, V2 may reach near V1 and forward the request on behalf of V1. In this case, the network again fails to deliver the desired data to V1.
(iii) The network suffers data delay, data loss, and cache misses due to an ineffective cache management strategy in VNDN.

To enhance the performance of VNDN, cooperative edge caching methods are introduced in [9]; their results show improvements in cache hit ratio, network traffic management, and data access latency over other baseline strategies.

A caching strategy and social interactions are considered in [10] to improve communication, security, and entertainment in VANET applications. In this study, cooperative caching methods are divided into two groups according to where content is cached: (i) on-path caching and (ii) off-path caching. In the on-path method, content is cached by some nodes along the reverse path from the consumer to the producer, while in the off-path method, specific nodes are designated to cache content for the consumer. According to the authors, nodes with the same interests can build better connectivity links and achieve efficient dissemination in vehicular networks. Many strategies have been proposed in the literature to improve cache performance, but few caching methods are evaluated in dynamic scenarios [11]. Some existing strategies are based on a fixed caching probability, which increases content redundancy and maximizes hop count, while others are based on content popularity and ignore less popular content, resulting in low diversity. In light of the above discussion, an effective cache management strategy is required to mitigate these issues. Therefore, we introduce the Dynamic Cooperative Cache Management Scheme (DCCMS) based on social and popular data in VNDN. The main focus of DCCMS is to improve cache performance via popular data and social interaction in VNDN and to build a reliable communication link among nodes based on a master node. We design the master node to act hierarchically and collaborate with neighboring nodes to keep the most popular contents in the cache. Our method uses the maximum available cache resources to minimize latency and improve the cache hit ratio. The master node aims to improve overall network performance by keeping popular content for a long time and sharing it with requesters on demand. Existing strategies keep recently popular contents at individual nodes rather than for the whole network; social interaction between nodes is therefore necessary to keep a complete history of popular contents across the network and to evict them at specific time slots. The main contributions of our paper are listed as follows:
(i) We present a novel two-level caching scheme that distributes popular content in VNDN networks.
(ii) The master node is introduced to fulfill future requests from nearby nodes. The master node stores content that cannot be accommodated by nearby nodes due to limited storage capacity. Moreover, the master node uses social interaction with other nodes to obtain copies of the most popular contents (MPC) across the whole network rather than from a single node.
(iii) We design an eviction policy based on content popularity and time slots, ensuring cached contents' freshness and dealing with less popular content.
(iv) Comprehensive simulations show that our scheme suits dynamic traffic where the primary objectives are the highest cache hit ratio, low latency, maximum cache utilization, and efficient network performance.

The rest of this paper is organized as follows. We discuss the related work in Section 2. In Section 3, the system model and problem formulation are presented. We present the DCCMS strategy in Section 4. The comprehensive evaluation of DCCMS performance is demonstrated in Section 5, and finally, we conclude the work in Section 6.

2. Related Work

In this section, we discuss some recent studies related to our scheme. Local caching is used to minimize the computational cost in the network, and on-path caching (OPC) attracts the research community's attention due to its local use; the term cooperative caching is well known in vehicular networks for enhancing overall network performance. In [12], the authors proposed a cooperative cache that estimates router capacity and caches only popular contents. A collaborative caching model based on a hierarchical method is presented in [13] to fetch contents from the nearest node in the network; in this model, overhead occurs due to heavy network load in VNs. The authors of [14] proposed a caching strategy based on a two-tier heterogeneous model to optimize caching probability using two helpers for the edge server. The IDCC scheme [15] is designed to improve cache performance; in this study, a static topology is used to obtain the desired results based on a probabilistic caching method. A content delivery scheme [16] is designed for social communities of parked vehicles, in which vehicles communicate using distributed content to minimize network cost and content latency. Content replacement strategies are discussed in [17, 18], where a content copy is moved down at each node to replace content; this method increases memory cost and redundancy, and the root node cannot keep even one copy of the content. In [19], a probability-based caching scheme is designed around cache capacity and root node distance; the cache decision is based on the hop count to the server and the content router size. The authors of [20] developed a framework for content distribution with in-network and cross-domain caching. To investigate cooperative caching performance, the authors of [21] introduced a fog-based social-aware network and designed a content sharing scheme that allows network nodes to collaboratively cache content locally. To improve latency in VANETs, collaborative routing is discussed in [22], and vehicles are chosen as edge nodes for data dissemination in [23]. In [24, 25], the authors introduced multitier cooperative cache architectures to deal with popular contents in different networks and improved the delayed hit ratio in caching. Similar approaches for heterogeneous networks are proposed in [26–29], where caching nodes are introduced to improve content delivery. Furthermore, in [30, 31], caching schemes are designed that incorporate content popularity into the cache decision. To determine cache space and content replacement, the authors of [32] propose an algorithm in which popularity and replacement use static parameters. Moreover, in [33], a centrality-based caching strategy is introduced in which an intermediate node is chosen to cache contents; the results show improved cache utilization and reduced content redundancy in a static network. A cache consistency approach for ad hoc networks is introduced in [34], which uses the Global Positioning System (GPS) and a pull-based approach to enhance performance. Similar work is discussed in [35], which uses a hybrid pull-push approach to select relay nodes between the cache and the producer node; the cache nodes determine the cached data through a relay node.
Many authors have designed algorithms for cache consistency maintenance and content replacement [36–39] to solve cache consistency problems. In [40], the authors improve information retrieval performance from caches in Mobile Ad hoc Networks (MANETs) using location maintenance. Similar studies [41, 42] extract information based on users' social interaction via the base station (BS). AlwaysCache strategies are discussed for NDN in [43–46], where incoming contents are always cached at all nodes without replacement; however, accepting every new content generates redundancy and maximizes memory cost in the network. Caching strategies with a fixed probability are discussed in [47, 48]; according to their results, cache memory usage is reduced at the price of lower diversity and higher content replication. The authors of [49] introduced DPC using a probabilistic strategy; this method increases delay due to the lack of collaboration with other nodes. PCMP is discussed in [50], where popular contents are cached only at RSUs; as a result, the cache hit ratio decreases at low velocity and the hop count increases at high velocity. The performance of these studies is lower than that of DCCMS when implemented in a dynamic environment with respect to the core objectives of cache hit ratio and latency. Therefore, we propose DCCMS to mitigate these issues, enhance network performance in terms of cache hits, and reduce latency.

3. System Model

3.1. Network Model

In this section, we discuss the network model and problem formulation for VNDN.

In VNDN, consumers send requests toward the content producer to get the desired content. The server, RSUs, and neighboring vehicles can all provide content to the consumer. Let us consider Figure 2 as a network scenario for VNDN modeled by a graph G = (V, E), where V indicates the set of network nodes and E the set of communication links between the nodes. A vehicle v ∈ V can request contents and get the desired data from a caching node or the server node. We assume a double-direction road on which vehicles move with a given mobility speed, and the transmission coverage of each RSU forms a one-hop cluster containing the vehicles within the RSU's range at time t. The definitions of the main symbols employed in this paper are summarized in Table 1.

3.2. Packet-Flow Structure

We design a packet-flow structure integrated with the default NDN Type-Length-Value (TLV) packet structure for unique content to perform the desired action according to our proposed scheme. Figure 3(a) shows the interest packet structure, and in Figure 3(b), an Acknowledgment (Ack) attribute is added to convey information about content availability and data-chunk size.
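To make the extension concrete, the following is a minimal Python sketch of the packet fields; the field names (ack, chunk_size) are illustrative assumptions of the sketch, since the exact TLV encoding follows the NDN packet format rather than this listing.

from dataclasses import dataclass

@dataclass
class InterestPacket:
    # Default NDN interest fields (TLV-encoded on the wire).
    name: str                 # content name, e.g., "/vndn/road1/segment3"
    nonce: int                # random value to detect looping interests
    lifetime_ms: int = 4000   # interest lifetime

@dataclass
class DataPacket:
    # Default NDN data fields plus the Ack attribute of Figure 3(b).
    name: str
    content: bytes
    ack: bool = False         # hypothetical flag: content availability
    chunk_size: int = 0       # hypothetical field: size of data chunks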

3.3. Cache Model

We consider a content catalog F = {f_1, f_2, ..., f_N} of N content files, where requests arrive according to a Poisson process. Each node requests content f_i with a probability following the Zipf distribution, i.e., p_i = (1/i^α) / Σ_{k=1}^{N} (1/k^α), where α is the Zipf skewness parameter. Let us assume that a node v requests a random content f_i at time t with content size s_i. If the desired content is available in the local cache associated with v, the request is fulfilled, and there is no need to contact other nodes. Otherwise, v communicates with its neighboring nodes (vehicles and RSUs). If the requested content is found, the content provider node immediately sends it to the requesting node. If the neighboring nodes do not hold it, the content server responds. Fetching content from the server degrades network performance by lowering the cache hit ratio, increasing latency, and lowering cache utilization. To overcome these issues, we design DCCMS. Let us define a binary cache decision matrix X = [x_{i,j}] to check whether content f_i is cached at node j: x_{i,j} = 1 means that the content is cached at node j, and a cache hit at node j then occurs with probability

P_hit(j) = Σ_{i=1}^{N} p_i x_{i,j};  (1)

otherwise, the content is missed at node j. Each element of the binary cache decision matrix takes the value 0 or 1, such that

x_{i,j} = 1 if content f_i is cached at node j, and x_{i,j} = 0 otherwise.  (2)

According to the above discussion, if the content is missed at a nearby node due to limited capacity, then the content must be cached at the master node to enhance the cache performance. For more details, see Section 4.
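As a concrete illustration, the following Python sketch generates Zipf-distributed requests and checks the binary cache decision matrix; the catalog size, Zipf parameter, and cache capacity are illustrative values, not parameters of our evaluation.

import numpy as np

N = 100          # catalog size (illustrative)
ALPHA = 0.8      # Zipf skewness parameter (illustrative)
NODES = 10       # number of caching nodes

# Zipf popularity: p_i proportional to 1/i^alpha.
ranks = np.arange(1, N + 1)
p = 1.0 / ranks**ALPHA
p /= p.sum()

# Binary cache decision matrix x[i, j] = 1 iff content i is cached at node j.
rng = np.random.default_rng(42)
x = np.zeros((N, NODES), dtype=int)
for j in range(NODES):
    cached = rng.choice(N, size=10, replace=False, p=p)  # capacity of 10 items
    x[cached, j] = 1

def is_cache_hit(i: int, j: int) -> bool:
    """Return True if content i can be served from node j's cache."""
    return x[i, j] == 1

# Expected local hit probability at node 0, as in Equation (1).
print("P_hit(node 0) =", float(p @ x[:, 0]))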

3.4. Problem Formulation

In this section, we state the objective of DCCMS: reducing latency so as to obtain the maximum cache hit ratio. We assume that each node holds equally sized contents and that the content size is normalized to 1; for example, each node j in the network can cache C_j contents. The content popularity changes with respect to the given time slot Δt, so the content with the highest popularity value must be detected first and cached at the neighboring nodes to reduce latency, while contents with lower popularity values are evicted to enhance the local cache hit rate. The problem can be formulated as

max_X  Σ_{j∈V} Σ_{i=1}^{N} p_i x_{i,j}

subject to

Σ_{i=1}^{N} s_i x_{i,j} ≤ C_j, ∀ j ∈ V,  (3)

Σ_{j∈V} x_{i,j} ≤ 1, ∀ i ∈ {1, ..., N},  (4)

x_{i,j} ∈ {0, 1}, ∀ i, j.  (5)

Constraint (3) describes neighboring nodes' content placement according to their storage limits. Constraint (4) ensures that the desired content is cached at one node rather than multiple nodes; if Σ_{j} x_{i,j} = 0, the content is served by the server node. The cache decision variable is restricted to binary values by Constraint (5). Based on this formulation, we introduce DCCMS (see Section 4), which keeps the most popular content during Δt and distributes it to other nodes to maximize the cache hit ratio and improve the overall network performance.
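Since this placement problem is a knapsack-like integer program, a popularity-greedy heuristic gives a simple approximation. The sketch below is one such approximation under the constraints above; it is not the exact DCCMS procedure, and the node-selection rule (most free space first) is an assumption of the sketch.

def greedy_placement(p, sizes, capacities):
    """Greedily place contents at nodes, most popular first.

    p          : list of popularity values p_i
    sizes      : list of content sizes s_i (normalized to 1 in Section 3.4)
    capacities : list of cache capacities C_j per node
    Returns x[i][j] satisfying constraints (3)-(5).
    """
    N, M = len(p), len(capacities)
    x = [[0] * M for _ in range(N)]
    remaining = list(capacities)
    # Constraint (4): each content is placed at most once, popular ones first.
    for i in sorted(range(N), key=lambda i: -p[i]):
        # Pick the node with the most free space that can still fit f_i.
        j = max(range(M), key=lambda j: remaining[j])
        if remaining[j] >= sizes[i]:          # constraint (3)
            x[i][j] = 1                        # constraint (5): binary
            remaining[j] -= sizes[i]
        # else: content i is not cached and will be served by the server.
    return x

x = greedy_placement(p=[0.5, 0.3, 0.2], sizes=[1, 1, 1], capacities=[1, 1])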

4. Dynamic Cooperative Cache Strategy

4.1. Cache Management

We design a two-level cache strategy to get the desired content in VNDN. Accordingly, we define the chunk-layered weight function for a cache as

w(c_k) = β_k (v_k / v_max), 1 ≤ k ≤ K,

where c_k represents the layer-k chunk, v_max indicates the highest content value, and β_k is the control variable for each packet layer. We assume 1 ≤ k ≤ K, where k indexes the k-layer packet and K is the highest packet layer. In the VNDN network, the requesting node can get the desired content from a neighboring node (a vehicle or an RSU), as discussed in Algorithm 1. If the desired content arrives at the first-level cache L1 before its stale time and L1 still has capacity, it is stored there; the content is moved to the second-level (master) node L2 either when its cache time expires or when L1 has no more capacity for new content.

Moreover, the influence node is considered the key node of the network that can enhance performance, but selecting cache nodes in dynamic traffic is very complex. We therefore choose the master node as the influence node; the details of selecting the cache node are discussed in Section 4.2.
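The two-level hand-off can be sketched as follows; this is a minimal sketch in which the TTL value and L1 capacity are illustrative assumptions, and the master node is modeled as a local dictionary rather than a separate network entity.

import time

class TwoLevelCache:
    """L1: local vehicle cache; L2: master-node cache (Section 4.1 sketch)."""

    def __init__(self, l1_capacity=10, ttl_seconds=30.0):
        self.l1 = {}            # name -> (content, insertion time)
        self.l2 = {}            # master-node store: name -> content
        self.l1_capacity = l1_capacity
        self.ttl = ttl_seconds

    def insert(self, name, content):
        now = time.monotonic()
        # Demote stale entries to L2 (cache time expired).
        for n, (c, t0) in list(self.l1.items()):
            if now - t0 > self.ttl:
                self.l2[n] = c
                del self.l1[n]
        # Demote the oldest entry if L1 has no capacity for new content.
        if len(self.l1) >= self.l1_capacity:
            oldest = min(self.l1, key=lambda n: self.l1[n][1])
            self.l2[oldest] = self.l1.pop(oldest)[0]
        self.l1[name] = (content, now)

    def lookup(self, name):
        if name in self.l1:
            return self.l1[name][0]     # L1 hit
        return self.l2.get(name)        # L2 (master node) hit, or None on miss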

4.2. Caching Node

We choose the cache node based on node centrality methods [46] and on social characteristics, using Jaccard similarity indexing to measure the most influential node in the network [51]. Let A be the adjacency matrix of the network, whose entry A_{ij} takes the value 1 if node i is connected to node j, and 0 otherwise. Moreover, the powers of A capture links between two nodes via intermediate nodes: in the matrix A^k, an entry (A^k)_{ij} > 0 shows that node i and node j are linked by a walk of length k. Thus, the centrality is measured as

C(i) = Σ_{k=1}^{∞} Σ_{j=1}^{n} α^k (A^k)_{ij},  (6)

where C(i) is the centrality of node i, the entry (A^k)_{ij} at location (i, j) of matrix A^k indicates the number of length-k walks between the two nodes, and α represents the weight factor. We can also calculate the dynamic communication between the nodes as

Q = (I − αA^{[t_1]})^{−1} (I − αA^{[t_2]})^{−1} ⋯ (I − αA^{[t_M]})^{−1},  (7)

where I is the identity matrix and t_1 ≤ t_2 ≤ ⋯ ≤ t_M is the series of time points. Finally, by using Equations (6) and (7), we can calculate the centrality effect of a node in terms of broadcasting and receiving requests dynamically in the whole network:

C_b(i) = Σ_{j} Q_{ij},   C_r(j) = Σ_{i} Q_{ij},  (8)

where C_b(i) indicates the broadcasting centrality and C_r(j) the receiving-request centrality. Furthermore, Q_{ij} is the dynamic communication weight counted over walks of any length from node i to node j.
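A compact numerical sketch of this selection step is given below, covering the centrality computation only (the Jaccard similarity step is omitted). It assumes the standard convergence condition that α is smaller than the reciprocal of the largest eigenvalue of each adjacency matrix, and the three-node network is purely illustrative.

import numpy as np

ALPHA = 0.1  # weight factor; must satisfy alpha < 1/(largest eigenvalue)

def katz_centrality(A, alpha=ALPHA):
    """Equation (6): C = ((I - alpha*A)^-1 - I) 1, summing walks of all lengths."""
    n = A.shape[0]
    I = np.eye(n)
    return (np.linalg.inv(I - alpha * A) - I) @ np.ones(n)

def dynamic_communicability(snapshots, alpha=ALPHA):
    """Equation (7): Q = prod_m (I - alpha*A[t_m])^-1 over ordered time points."""
    n = snapshots[0].shape[0]
    Q = np.eye(n)
    for A in snapshots:
        Q = Q @ np.linalg.inv(np.eye(n) - alpha * A)
    return Q

# Three-node toy network observed at two time points.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)

Q = dynamic_communicability([A1, A2])
broadcast = Q.sum(axis=1)           # Equation (8): row sums, broadcasting effect
receive = Q.sum(axis=0)             # Equation (8): column sums, receiving effect
master = int(np.argmax(broadcast))  # most influential node becomes the master
print("master node:", master)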

4.3. Content Replacement

We design Algorithm 2 for content replacement based on content popularity. We divide the cache into two queues: (1) the most popular content (MPC) queue and (2) the least popular content (LPC) queue. First, the consumer node checks its own CS table; if the content is available, it is served and the CS table is updated. If the desired content is unavailable, the request is forwarded to nearby nodes or to the caching node selected based on centrality (see Section 4.2). After receiving the desired content, if the CS is full, the following popularity measure is used to evict content from the CS:

P_i(Δt) = ε (r_i(Δt) / R(Δt)),  (9)

where r_i(Δt) indicates the number of requests for content f_i within time slot Δt, R(Δt) represents the total number of requests within the given time slot, and ε is the popularity increase constant.

Algorithm 1: Two-level content lookup.
Initialize: request for content c from consumer node v
Output: content c delivered to v
if c is available at L1 then
  serve c from L1;
  update the CS table;
  set cache hit;
else
  forward the request to the neighboring nodes;
  if c is found at a neighboring node then
    deliver c to v;
  else
    forward the request to the master node (L2);
if c is not cached at L2 then
  forward the request to the server node;
if Q is full then
  initiate Algorithm 2;
End Procedure
Algorithm 2: Popularity-based content replacement.
Initialize: cached contents with popularity values P_i (Equation (9))
Output: content replaced
if P_i exceeds the popularity threshold then
  share a copy of the MPC into L2; // using Eq. (6)
  evict the LPC;
if Q is full then
  calculate P_i for each cached content;
  evict the LPC content;
  allocate the freed space to the new content;
End Procedure
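To make the replacement policy concrete, the following is a minimal Python sketch of the two-queue eviction; the popularity threshold is an illustrative placeholder, and the master-node hand-off is modeled with a plain dictionary, whereas the real scheme selects L2 via the centrality of Section 4.2.

def replace_content(cache, new_name, new_content, requests, total_requests,
                    capacity, l2_store, threshold=0.5, eps=1.0):
    """Algorithm 2 sketch: evict the least popular content when the cache is full.

    cache          : dict name -> content (the CS)
    requests       : dict name -> request count r_i within the time slot
    total_requests : total request count R within the time slot
    l2_store       : master-node store receiving copies of popular content
    """
    def popularity(name):
        # Equation (9): P_i = eps * r_i / R.
        return eps * requests.get(name, 0) / max(total_requests, 1)

    # Share the most popular contents (MPC) with the master node (L2).
    for name in list(cache):
        if popularity(name) >= threshold:
            l2_store[name] = cache[name]

    # Evict the least popular contents (LPC) until there is room.
    while len(cache) >= capacity:
        lpc = min(cache, key=popularity)
        del cache[lpc]

    cache[new_name] = new_content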

5. Simulation and Performance Evaluation

5.1. Simulation Scenario

We consider a bidirectional two-lane road with a 15 km pathway. The network size varies from 30 to 90 vehicles, the velocity is set from 5 to 20 meters per second (m/s), and 20 RSUs are deployed. The simulation time is set from 100 to 1000 seconds, and each simulation is run 100 times to validate the results. The simulation parameters are listed in Table 2.

5.2. Performance Evaluation

We evaluate the DCCMS performance using ndnSIM [52] based on NS-3 [53]. We compare our work with AlwaysCache [45], Prob(0.5) [48], DPC [49], and PCMP [50]. The overall results show that our proposed DCCMS method performs well compared with the other methods. The improvement comes from the master node, which collaborates with other nodes to gather the MPC across the whole network and distribute it when needed.

5.3. Simulation Results

Cache hit ratio: the probability that a request is satisfied by a cache node rather than the server node, which is measured as

CHR = N_hit / (N_hit + N_miss),  (10)

where CHR represents the cache hit ratio and N_hit and N_miss count the numbers of cache hits and cache misses, respectively. Figure 4 shows the results of the cache hit ratio. The DPC study claims that the cache hit ratio of DPC is initially lower than those of PCMP, AlwaysCache, and Prob(0.5) but improves with increasing time. Our proposed work improves the cache hit ratio on average by 68% over Prob(0.5), 65.5% over DPC, 55.8% over AlwaysCache, and 21.6% over PCMP. We obtain this result because our cache strategy collaborates with other nodes, and all requested contents are cached at nearby nodes in the network.

Latency is the time delay in obtaining a successful response, which can be measured as

L = t_res − t_req,  (11)

where L indicates the latency, t_req is the time at which the consumer sends the request, and t_res is the time at which the producer responds for the specific content. As shown in Figure 5, DCCMS achieves the shortest latency compared with the other schemes, while DPC shows higher latency than AlwaysCache, Prob(0.5), and PCMP. We reduce the latency by 42% on average over the other state-of-the-art schemes: 67% over DPC, 51% over Prob(0.5), 28% over AlwaysCache, and 21% over PCMP.

Average hop count is the average number of hops between the requesting node and the content provider node, which is calculated by

H_avg = (1/N_req) Σ_{i=1}^{N_req} h_i,  (12)

where h_i is the number of hops traversed by request i and N_req is the total number of satisfied requests.

In Figure 6, the results show that the proposed DCCMS scheme outperforms the other strategies. We decrease the hop count through the master node, which cooperates with other nodes using the most popular content. Our proposed method decreases the average hop count by 53%, 50%, 49%, and 40% compared with Prob(0.5), AlwaysCache, DPC, and PCMP, respectively.

Content cache utilization indicates how often content is used after being cached. We can obtain it as

U = P_hit / T_cache,  (13)

where P_hit indicates the cache hit probability and T_cache represents the total time of the caching event. Cache utilization is an important factor in achieving better cache hit results and improving network performance. Figure 7 shows that our scheme is slightly lower than DPC with 30 vehicles but better than AlwaysCache, PCMP, and Prob(0.5). Moreover, as the network size increases to 50, 70, and 90 vehicles, our proposed work achieves an average cache utilization improvement of 55% overall compared with the other strategies DPC, AlwaysCache, Prob(0.5), and PCMP.

Diversity is the ratio of unique contents stored across all nodes in the entire network. We obtain it as the average number of network nodes that failed to cache a content each time, divided by the total number of nodes in the whole network.

Figure 8 illustrates that the diversity of Prob(0.5) is very low compared with the other schemes, owing to its high content replication. PCMP and DPC perform much better than AlwaysCache and Prob(0.5). However, DCCMS improves the overall diversity performance by 39% on average over the baseline strategies. We achieve this diversity ratio with a minimum number of content replicas, whereas the other strategies generate too much content redundancy, which causes low diversity in the network.

Server load ratio indicates the fraction of requests satisfied by the server node rather than other nodes in the whole network. The server load can be calculated as

SL = N_server / N_total,  (14)

where N_server is the number of requests satisfied by the server and N_total is the total number of requests.

Figure 9 indicates that our scheme handles cache misses better and decreases the server load by 30%, 28%, 27%, and 18% compared with Prob(0.5), DPC, AlwaysCache, and PCMP, respectively, as time increases. We achieve this performance by caching the consumer's content at nearby nodes.
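For reference, the scalar metrics above (cache hit ratio, latency, average hop count, and server load) can be computed from simulation traces as in the short sketch below; the per-request record format is an assumption for illustration, since ndnSIM exports its own trace files.

def evaluate(trace):
    """Compute cache hit ratio, mean latency, average hop count, and server load.

    trace: list of dicts, one per satisfied request, e.g.,
           {"hit": True, "t_req": 1.00, "t_res": 1.04, "hops": 2, "server": False}
    """
    hits = sum(1 for r in trace if r["hit"])
    misses = len(trace) - hits
    chr_ = hits / (hits + misses)                                  # Eq. (10)
    latency = sum(r["t_res"] - r["t_req"] for r in trace) / len(trace)  # Eq. (11)
    avg_hops = sum(r["hops"] for r in trace) / len(trace)          # Eq. (12)
    server_load = sum(1 for r in trace if r["server"]) / len(trace)  # Eq. (14)
    return chr_, latency, avg_hops, server_load

trace = [
    {"hit": True,  "t_req": 1.00, "t_res": 1.04, "hops": 1, "server": False},
    {"hit": False, "t_req": 2.00, "t_res": 2.20, "hops": 4, "server": True},
]
print(evaluate(trace))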

5.3.1. Effect of Velocity

We analyze the effect of velocity on network performance with different vehicle velocities, using 50 vehicles and a simulation time of 100 s. Figures 10-12 show the cache hit ratio, latency, and average hop count, respectively, for velocities from 5 m/s to 20 m/s. We observe that the overall performance of our DCCMS scheme is better than that of the other methods thanks to the master node, which mitigates the impact of velocity on the network and cooperates with neighboring nodes to cache the content.

In Figure 10, the results show that the cache hit ratio of all strategies decreases as vehicle speed increases. However, we observe that the performance of DCCMS is better and less affected by increasing velocity. We achieve improvements of 47% over DPC, 24% over Prob(0.5), 23% over AlwaysCache, and 21% over PCMP.

Figure 11 shows that the latency performance of DCCMS is the best among the compared strategies. We decrease the latency by 69%, 42%, 63%, and 10% compared with DPC, Prob(0.5), AlwaysCache, and PCMP, respectively.

Comparing the average hop count with the other methods, the results in Figure 12 illustrate that our DCCMS scheme reduces the hop count on average by 45% over Prob(0.5), 44% over AlwaysCache, 43% over DPC, and 35% over PCMP.

6. Conclusion

VNDN plays a key role in content delivery in vehicular networks. This study proposed a novel two-level cache strategy to improve popular content delivery in high-mobility situations. We introduced a master node that cooperates with other network nodes based on content popularity in a dynamic environment. The caching node is chosen based on centrality and social interaction with other nodes. The simulation results show that our DCCMS scheme is more efficient than other state-of-the-art strategies; DCCMS is preferable when a higher cache hit ratio, fewer hops, less redundancy, higher cache utilization, and lower latency are the genuine concerns in the network. In the future, we plan to investigate forwarding schemes and content replacement in both static and dynamic environments to improve network performance based on the most frequent nodes in the network. Moreover, we will design a caching strategy for the Internet of Things (IoT) to optimize the cache and improve our scheme according to the stated problems.

Data Availability

The simulation data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research study is funded by the National Natural Science Foundation of China under Grant 61772385. The authors acknowledge this financial support.