Abstract

In real network environments, a node must identify the destination of a transmission and select suitable neighbor nodes to forward data effectively, much as mobile devices search for transmission targets while exchanging data. However, node cache space is limited, and waiting for the destination node introduces end-to-end delay. To improve the transmission environment, this study establishes a Data Transmission Probability and Cache Management method. Nodes with a high meeting probability are selected, and the node cache space is reconstructed accordingly, which helps nodes improve the delivery ratio and reduce delay. Experiments and comparisons with existing opportunistic network algorithms show that the method improves the cache utilization of nodes, reduces data transmission delay, and improves overall network efficiency.

1. Introduction

An opportunistic network is a type of multihop wireless network that has emerged in recent years [1]. The key feature distinguishing opportunistic networks from ad hoc networks [2–4] is that a contemporaneous end-to-end path may never exist; however, the union of contacts over time may form an end-to-end path across snapshots. Application areas of opportunistic networks include military communications [5], interplanetary networks [6], networks in underdeveloped areas [7], field tracking [8], and disaster rescue [9].

In opportunistic networks, the traditional routing paradigm of the Internet and ad hoc networks, in which routes are computed from topological information, becomes inadequate. The first approach to routing in opportunistic networks is a variation of controlled flooding: all messages are flooded, limited by a time to live (TTL), until they reach their destinations, and every contact gives a node the opportunity to receive a copy of the message. Several more advanced proposals replace topological information with higher-level information while attempting to limit the cost of flooding.

In social network application scenarios, data occupy significant cache space on a device because people use portable mobile devices during transmission and no suitable target within transmission range responds, which eventually causes transmission delay. On the one hand, many pieces of pending information are stored on the device, and some may be held for a long time without being accepted or acknowledged by the recipient. On the other hand, new data arrive and emergency information must be released in real time, but because the remaining cache space is limited, the new data may fail to be stored [10, 11].

To solve these problems, this work presents an Information Transmission Probability and Cache Management (ITPCM) method based on the node's data cache. The algorithm lets a node identify its surrounding neighbors and evaluate the meeting probabilities between nodes, adjust the distribution of cached data accordingly, ensure that nodes with high meeting probability obtain data preferentially, and thereby achieve the objective of cache adjustment. Meanwhile, to avoid deleting cached data, the caching burden of a node is shared with its neighbors by writing data into their caches, performing an effective diversion of data.

The contributions of this study are as follows:

By analyzing the relationships between nodes, the probability of meeting each neighbor is evaluated.

The list of nodes is sorted after the evaluation, and the node cache is reconstructed.

Through this cache adjustment method, the algorithm improves the delivery ratio and reduces the end-to-end data transmission delay.

2. Related Work

Research on opportunistic networks currently focuses on routing algorithms. Existing routing algorithms can be adapted to different areas through improvement. Some methods adopted in opportunistic networks are as follows.

Grossglauser and Tse [12] suggested the Epidemic algorithm, a store-and-forward mechanism that imitates the transmission mechanism of infectious diseases. In this algorithm, when two nodes meet, each forwards the messages that the other does not yet store. This method is similar to exclusive-or (EX-OR) transmission and allows nodes to obtain additional information. By increasing network bandwidth and buffer space, the route by which a node reaches the destination and delivers the message can be guaranteed to be the shortest. In practice, however, resources in real networks are limited, so congestion occurs as the number of nodes involved in transmission increases, and the method cannot obtain good results.

Wang et al. [13] proposed the spray and wait algorithm based on the Epidemic algorithm. The algorithm consists of two phases, namely, spraying and waiting. The source node first counts the available nodes around it for message transmission and then spreads copies of its message to those nodes through spraying. If no available node is found during the spraying phase, the waiting phase delivers the message to the destination node through direct delivery to complete the transmission. This method improves on the flooding behavior of the Epidemic algorithm. However, the spraying phase may exhaust the resources of source nodes when a large number of neighbor nodes exist, because the copies consume considerable space; hence, random overspraying can cause the death of source nodes in some networks.

Spyropoulos et al. [14] recommended the PRoPHET algorithm. This algorithm improves network utilization by first counting the nodes available for message transmission and then calculating the appropriate transmission nodes to form message groups. Leguay et al. [15] established the MV algorithm based on the probability algorithm. This algorithm calculates transmission probability from records and statistics collected while nodes meet and visit areas.

Burgess et al. [16] presented the MaxProp algorithm, which is based on priorities assigned to an array. When two nodes meet, the transmission order is determined according to the settled array priority. This decreases resource consumption and improves the efficiency of the algorithm by arranging a reasonable sequence for message transmission. Leguay et al. [17] suggested the MobySpace algorithm, in which node groups or pairs with high relevance form a self-organizing transmission area to realize optimal communication among nodes.

Burns et al. [18] recommended a context-aware routing algorithm that calculates the transmission probability of source nodes reaching target nodes. The algorithm obtains the intermediate node by cyclically exchanging and computing transmission probabilities and then collects and groups messages so that the intermediate node forwards them directly to the node with the higher transmission probability.

Kavitha and Altman [19] presented the message ferry algorithm, which groups messages for transmission. The algorithm classifies and groups the messages collected by the source nodes and then counts the existing transmission traces of each ferry node in the network to derive the movement rule of the ferry nodes. The source node moves toward the ferry node automatically during message transmission. Transmission can be improved by predicting node movement traces in this way.

This paper discusses and demonstrates the application of opportunistic networks to social networks based on the analysis and summary of related works.

3. System Model Design

3.1. Analysis Model of Node Connection Status

The capability to forward and cache the messages conveyed by an encountered node becomes robust when the node establishes a strong connection. According to these connections, the running time of a node in the network can be divided into connection interval time and connection duration. The following analyses can be obtained by examining the state of the node.

It is assumed that the nodes in the network are independent and identically distributed and that the motion state of a node is unaffected by the motion states of other nodes. Therefore, connection events in nonoverlapping time domains are independent of each other; the constraint condition is expressed in (1).

In this constraint, the two factors represent the connection probabilities between the two nodes at the two respective times.
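A plausible form of constraint (1), written with assumed symbols since the original equation was not recoverable, denotes by C the event that the two nodes are connected within an interval and by p the corresponding connection probability:

% Independence of connection events in nonoverlapping time intervals (sketch; symbols are assumed)
P\bigl\{C_{ij}(t_1)\cap C_{ij}(t_2)\bigr\}
  = P\bigl\{C_{ij}(t_1)\bigr\}\, P\bigl\{C_{ij}(t_2)\bigr\}
  = p_{ij}(t_1)\,p_{ij}(t_2),
\qquad [t_1,\,t_1+\tau]\cap[t_2,\,t_2+\tau]=\varnothing .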

Given that encounters between nodes can be described by a Poisson process, the connection state of a node is related to its connection strength and the monitored time interval. Within the monitored time interval, the connection probability of the node can be expressed in (2).

In this expression, the first term indicates that the node has established a connection within the interval, the connection strength governs the rate, and the remaining term is a high-order infinitesimal of the interval length.

The probability that a node continuously establishes two or more connections within a brief time interval is a minimal-probability event, as shown in (3).
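Under the stated Poisson assumption, (2) and (3) plausibly take the standard form below, with the connection strength written as lambda and a higher-order infinitesimal term; these symbols are assumptions, since the original equations were not recoverable:

% Probability of exactly one connection in [t, t+\Delta t]  (sketch of (2))
P\bigl\{N_{ij}(t,\,t+\Delta t)=1\bigr\} = \lambda_{ij}\,\Delta t + o(\Delta t)

% Probability of two or more connections in the same brief interval  (sketch of (3))
P\bigl\{N_{ij}(t,\,t+\Delta t)\ge 2\bigr\} = o(\Delta t)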

In (1), (2), and (3), the establishment of a connection between nodes in a given period is a random event equivalent to a Poisson counting process; thus, the interval between two successive connections of a pair of nodes follows an exponential distribution [5]. In addition, relevant studies show that the duration of a node connection can be subdivided [8]. The node connection is thus described by the state analysis model. With it, the service capability that a node can devote to transferring a specified message can be analyzed to estimate the message transmission probability and to provide the decision basis for forwarding and deleting a message.

3.2. Calculation Method for Data Transmission Probability

The arrival strength of node connections reflects the strength of the node's service capability. The duration of a node connection exhibits strong randomness and is limited by the media-sharing characteristics of the channel; in addition, link conflicts occur among nodes. Therefore, the service capability of a node should fully consider both the connection arrival strength and the fluid availability of the connection.

The nodes in the network can be analyzed with the established analysis model of the distributed connection status. The average connection strength of a node over an interval can be estimated from the locally recorded number of connections and the current system running time, as shown in (4).

Furthermore, the connection strength of the node can be obtained from this record, as shown in (5).

For a node, a connection is characterized by three parameters, namely, the connection setup time, the connection broken time, and the connection duration, as expressed in (6).

According to the locally recorded historical information, the average connection duration of the node can be obtained, as presented in (7).

Apparently, the rate at which nodes are served within a given time interval is directly related to the number of serviced nodes. With the same connection strength, a node with a faster service rate can serve more nodes, can cache more messages, and thus shows a more robust service capability. The service rate of a node can be obtained from the average connection duration derived above, as provided in (8).
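As a minimal sketch of the quantities in (4)–(8), the following Python fragment estimates a node's connection strength, average connection duration, and service rate from its locally recorded connection history; the record format and function names are illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class ConnectionRecord:
    setup_time: float   # time the connection was established
    break_time: float   # time the connection was torn down

def connection_strength(records: List[ConnectionRecord], now: float) -> float:
    """Average connection arrival rate (connections per unit running time), cf. (4)-(5)."""
    return len(records) / now if now > 0 else 0.0

def average_duration(records: List[ConnectionRecord]) -> float:
    """Average connection duration from the locally recorded history, cf. (6)-(7)."""
    if not records:
        return 0.0
    return sum(r.break_time - r.setup_time for r in records) / len(records)

def service_rate(records: List[ConnectionRecord]) -> float:
    """Service rate, taken here as the reciprocal of the average duration, cf. (8)."""
    d = average_duration(records)
    return 1.0 / d if d > 0 else 0.0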

The probabilities that the node's connection is in the established state or in the off state at a given time satisfy the differential equations derived from the properties of the node queue model, as given in (9).

At any time, the connection state of the node must be either established or disconnected; to satisfy the regularity condition, the two probabilities sum to one, as expressed in (10).

The density of the network node distribution is relatively sparse, and at the initial time the nodes in the system change from the static to the critical state. Hence, the event that any node is connected to another node has minimal probability, as shown in (11).

Given that encounters between nodes can be described by a Poisson process, the connection state of a node is related to its connection strength and the monitored time interval [5]. From (9), (10), and (11), the probability that the node is connected at a given time is obtained, as presented in (12).

The operating states in the node's transition set all communicate with one another; thus, a stationary distribution exists, as given in (13).

According to (3), the higher-order infinitesimal term can be neglected [8]. Thus, at a steady-state time point, the probability that the connection is in the off state can be obtained in terms of the load intensity of the node, as shown in (14).

A new node connection can be established at a given time only if the node connection is currently in the disconnected state. Therefore, the probability that the node's connection is in the disconnected state at any time is equivalent to the fluid availability of the connection, as presented in (15).
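One plausible reading of (9)–(15), assuming the connection of a node alternates between an off state and an on state with arrival rate lambda and service rate mu (a two-state birth-death model; the paper's exact equations were not recoverable), gives the steady-state probabilities and the fluid availability:

% Balance and normalization at steady state, with load intensity rho (sketch of (13)-(14))
\lambda_i\,P_{\mathrm{off}} = \mu_i\,P_{\mathrm{on}},\qquad
P_{\mathrm{on}} + P_{\mathrm{off}} = 1,\qquad
\rho_i = \frac{\lambda_i}{\mu_i}

% Off-state probability, equal to the fluid availability of the connection (sketch of (15))
A_i = P_{\mathrm{off}} = \frac{1}{1+\rho_i} = \frac{\mu_i}{\lambda_i+\mu_i},
\qquad
P_{\mathrm{on}} = \frac{\rho_i}{1+\rho_i}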

The service force of the node can then be obtained, as expressed in (16).

The delivery status of a message is related to the relay nodes that transmit it. The service capability of the relay nodes that have stored the message should therefore be considered when estimating its delivery probability. In this study, the relative service capability of each node in the network is obtained according to the proposed measure of node service capability, as provided in (17).

Here, the denominator denotes the maximum node service capability, which can be obtained through information interaction among the nodes. Considering the relative service capabilities of the relay nodes recorded in the message, the delivery probability can be obtained, where the relevant count is the number of relays stored along the message's transmission path. A larger value equates to a higher probability of message delivery and to a corresponding reduction of further forwarding and storing of the message.
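A plausible aggregation consistent with this description, though not necessarily the paper's exact formula, normalizes each relay node's service capability by the network maximum and combines the relays recorded on the message's path:

% Relative service capability of relay node k (sketch of (17))
C_k = \frac{S_k}{S_{\max}},\qquad 0 < C_k \le 1

% Estimated delivery probability over the n relay nodes stored in the message
P_d = 1 - \prod_{k=1}^{n}\bigl(1 - C_k\bigr)

Under this form, the estimate grows with the number of capable relays, matching the observation that a larger value implies a higher delivery probability and less need for further forwarding and storage.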

In this study, message transmission exploits node communication opportunities, the exchange of message propagation paths among nodes, and the information service capability of nodes. After a brief convergence time, once the network is stable, a node can approximately obtain the local message transmission paths and the service capability of each node.

3.3. Information Delivery and Cache Management in Algorithm

Nodes in an opportunistic network have a strong social attribute: a node not only needs to cache the messages it generates but also provides caching and forwarding services for others [20]. However, to ensure that the messages they generate are delivered successfully, nodes usually show a certain degree of selfishness; that is, they give higher caching priority to the messages they generate themselves. When many messages exist in the network, a node readily takes on the storage, carrying, and forwarding of messages, while other nodes allocate less cache space for distributing them, which causes a serious cache competition problem. Therefore, when designing the cache management strategy, the source of each message should be considered, and cache resources should be allocated separately for messages generated by the node itself and by other nodes. Generally, the cache of a message's source node does not drop that message. The node cache is therefore divided into a local caching area, which stores the messages the node itself generates, and a cooperative caching area, which stores the messages generated by other nodes. Its structure is shown in Figure 1.
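The following Python sketch illustrates the partitioned buffer of Figure 1, with a local area for self-generated messages and a cooperative area for carried messages; the class and field names are illustrative assumptions, not the paper's implementation.

from collections import OrderedDict

class PartitionedCache:
    """Node buffer split into a local area (own messages) and a
    cooperative area (messages carried for other nodes)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.local = OrderedDict()        # msg_id -> (size, importance)
        self.cooperative = OrderedDict()  # msg_id -> (size, importance)

    def used(self) -> int:
        return sum(s for s, _ in self.local.values()) + \
               sum(s for s, _ in self.cooperative.values())

    def free(self) -> int:
        return self.capacity - self.used()

    def add_local(self, msg_id, size, importance) -> bool:
        # Locally generated messages get priority and are not dropped here.
        if size <= self.free():
            self.local[msg_id] = (size, importance)
            return True
        return False

    def add_cooperative(self, msg_id, size, importance) -> bool:
        # Carried messages have lower priority: evict the least important
        # cooperative message when space runs out.
        while size > self.free() and self.cooperative:
            victim = min(self.cooperative, key=lambda k: self.cooperative[k][1])
            del self.cooperative[victim]
        if size <= self.free():
            self.cooperative[msg_id] = (size, importance)
            return True
        return False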

To improve the robustness of the storage and forwarding processes and effectively reduce the load, a source-aware partitioned cooperative cache replacement method is proposed, in which messages from different partitions are handled with different replacement methods for their respective cache areas. The basic process of this method is as follows:

In the initial state, a higher cache replacement and forwarding priority is set for messages in the local cache area, and a lower priority is set for messages in the cooperative cache area.

When nodes meet, they perform cache replacement or message forwarding. The messages are copied or transferred from the local cache to each other on the basis of the importance of the message. The structure is shown in Figure 2.

When two nodes meet, each first replicates its local cache area message set and then, if cache space is available, extracts messages from its cooperative cache area. Both sides then estimate the probability of meeting the destination node of each message and identify, within the extracted sets, the messages for which the other node has the higher meeting probability. Finally, each node copies the selected messages from its local cache area into the other node's cooperative cache area; if the other node still has cache space, the messages in its own cooperative cache area are transferred directly into the other node's cooperative cache area as well.
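A minimal sketch of this exchange, reusing the PartitionedCache sketch above and assuming a hypothetical meet_prob(msg_id) estimator derived from Section 3.2, is shown below; only one direction is written out, and the symmetric direction is handled identically.

def exchange_on_meeting(cache_a, cache_b, meet_prob_a, meet_prob_b):
    """One direction of the cache replacement performed when two nodes meet."""
    # Copy local-area messages that the peer is more likely to deliver.
    for msg_id, (size, importance) in list(cache_a.local.items()):
        if meet_prob_b(msg_id) > meet_prob_a(msg_id) and cache_b.free() >= size:
            cache_b.add_cooperative(msg_id, size, importance)   # copy; keep the local original
    # If space remains, shift cooperative-area messages to the better carrier.
    for msg_id, (size, importance) in list(cache_a.cooperative.items()):
        if meet_prob_b(msg_id) > meet_prob_a(msg_id) and cache_b.free() >= size:
            if cache_b.add_cooperative(msg_id, size, importance):
                del cache_a.cooperative[msg_id]                  # transfer, not copy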

According to the above method, we propose a distributed cache management method for nodes in the opportunistic network; the process is as follows.

Step 1. If the meeting node is the destination node of the message, then send the message directly to the other node.

Step 2. If the meeting node is not the destination node of the message and has free cache space, then the cache is replaced according to the aforementioned cache substitution method. Both nodes estimate the probability of encountering the destination nodes of the messages they hold. Combined with message importance, the more important messages in the local cache area are preferentially copied into the other node's cooperative cache area. If cache space is still available, the messages in the two nodes' cooperative cache areas continue to be transferred into each other's cooperative cache areas.

Step 3. If the meeting node is not the destination node of the message and the local cache is full, then the node uses the proposed distributed cooperative caching transfer method to transfer information from its local cache area. The node first determines a dynamic set of collaboration nodes among its neighbors. Then, on the basis of message importance, the important messages are broadcast to the collaboration nodes and deleted from the local cache.

Step 4. In the distributed cooperative caching transfer method, if the node cannot find a suitable collaboration node within its communication range, then the messages with the lowest importance in the local cache area are deleted first.

Step 5. Collaboratively cached messages are returned. When a node holding a collaboratively cached message meets the storing node again and surplus cache space is available, the two nodes compare their probabilities of meeting the message's destination node. If the storing node has the higher probability, the message is redirected back to it, and the collaborating node deletes the message from its cache; otherwise, nothing is done.

From the above, we can state an algorithm that implements this method.

In Algorithm 1, the time complexity is linear in the number of messages, because message transmission traverses a list indexed from 1 to its length. For comparison, in the spray and wait algorithm, the spray step must select neighbors for spraying and then store the messages, which is likewise linear in the number of neighbors. In the Epidemic algorithm, messages are transmitted directly when nodes meet, so the per-contact complexity is constant.

Algorithm 1: Information Cache Management and Transmission
Input: node A, node B, cache A, cache B      // A: local node, B: encountered node
Output: message list, probability
Begin
  Create meeting list
  While node A and node B meet
    If (node B.isTargetNode())
      send message to node B
    Else if (!node B.isTargetNode() and !cache A.isFull() and !cache B.isFull())
      evaluate meeting probability and message importance
      copy message from cache A to node B
    End if
    If (!node B.isFull())
      If (message importance is high enough)      // condition inferred from Step 2
        transfer cache A.getMessage() to cache B
      End if
    Else if (!node B.isTargetNode() and node B.isFull())
      set location = node A.selectCooperationNodeSet()
      broadcast important messages to the neighbor nodes in location
      cache A.deleteBroadcastMessage()
      If (location.isEmpty())
        cache A.deleteSomeMessage()               // drop low-importance local messages (Step 4)
      End if
      transfer the message when node A meets a cooperation node again   // Step 5
    End if
    If (!cache B.isFull())
      ProbabilityA = node A.meetMessageTargetNode()
      ProbabilityB = node B.meetMessageTargetNode()
    End if
    If (ProbabilityB > ProbabilityA)              // comparison inferred from Steps 2 and 5
      transfer cache A.getMessage() to cache B
      cache A.deleteTransferMessage()
      output message
    End if
  End while
End

4. Simulation

4.1. Simulation Environment

The simulation adopts the Opportunistic Network Environment (ONE) simulator, version 1.56, to test the ITPCM in a realistic environment and to analyze its performance. The tool supports different mobility models to describe movement traces and records the transmitted data packets; the Shortest Path Map-Based Movement (SPMBM) model is adopted in the simulation. In this study, the ITPCM is compared with the following classical algorithms:
(1) Epidemic-TTL routing algorithm (TTL = 60 min).
(2) Spray and wait routing algorithm (copy = 10).
(3) Spray and wait routing algorithm (copy = 30).

In addition, the simulation uses OpenStreetMap to edit the city map. Different parks, streets, and shops are placed on the map so that it reflects a real environment. Figure 3 shows the simulation map, which uses a real map of Helsinki.

The parameters are set based on random models of social networks. The parameters adopted in the experiment are as follows. The simulation time is 1 hour to 6 hours, and the simulation area on the map is 4500 m × 3400 m. The number of nodes is 350. The transmission pattern is broadcast, the maximum transmission area of each node is 10 m², and the sending interval of data packets is 25 s to 35 s. The data packet type is a random array. Moreover, a node consumes one unit of energy when it sends a data packet. Each node carries 10 data packets, and the movement pattern of the nodes follows a social model. Furthermore, the movement speed of the nodes is 0.5–1.5 m/s, and the cache of each node is 5 MB.
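For reference, the scenario could be expressed as an ONE simulator settings fragment along the following lines; the key names follow the simulator's standard configuration format, while the mapping of the stated parameters (for example, reading the 10 m² transmission area as a 10 m interface range) and the placeholder values are assumptions, not the authors' actual settings file.

# Sketch of an ONE settings fragment matching the stated scenario (assumed)
Scenario.name = ITPCM-comparison
# 6 hours, in seconds
Scenario.endTime = 21600
# simulation area in metres
MovementModel.worldSize = 4500, 3400

Group.nrofHosts = 350
Group.movementModel = ShortestPathMapBasedMovement
# per-node cache
Group.bufferSize = 5M
# node speed in m/s
Group.speed = 0.5, 1.5
# minutes; used by the Epidemic-TTL baseline
Group.msgTtl = 60
# replaced by SprayAndWaitRouter or the ITPCM router per run
Group.router = EpidemicRouter
Group.nrofInterfaces = 1
Group.interface1 = btInterface

btInterface.type = SimpleBroadcastInterface
# assumed from the stated transmission area
btInterface.transmitRange = 10
# placeholder; not stated in the paper
btInterface.transmitSpeed = 250k

Events1.class = MessageEventGenerator
# packet creation interval in seconds
Events1.interval = 25, 35
Events1.hosts = 0, 349
Events1.prefix = M
# placeholder; message size not stated in the paper
Events1.size = 500k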

4.2. Parameter Analysis

In opportunistic networks, delivery ratio, overhead, and delay are very important research parameters because they determine network performance. Moreover, energy consumption indicates how much data the nodes can transmit. Therefore, these parameters are selected for the simulation.

ITPCM is compared with the classical algorithms mentioned in Section 4.1 to verify its performance. This study focuses on the following parameters:
(1) Delivery ratio: the probability of selecting a relevant node in the transmission process.
(2) Average overhead: the overhead between two nodes when information is transmitted.
(3) Energy consumption: the energy consumed by nodes in the transmission process.
(4) Average end-to-end delay: the delay of route seeking, the waiting delay in the data classification queue, the transmission delay, and the redelivery delay at the MAC layer.
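For reference, these metrics are commonly computed as follows; these are the standard definitions produced by the ONE simulator's message statistics report and are stated here as an assumption, since the paper's own definitions above are phrased differently:

\text{delivery ratio} = \frac{N_{\text{delivered}}}{N_{\text{created}}},\qquad
\text{overhead} = \frac{N_{\text{relayed}} - N_{\text{delivered}}}{N_{\text{delivered}}},\qquad
\text{average delay} = \frac{1}{N_{\text{delivered}}}\sum_{m}\bigl(t^{\text{recv}}_{m}-t^{\text{create}}_{m}\bigr)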

4.3. Simulation Results

Figure 4 shows the relationship between the delivery ratio and the cache size. As shown in Figure 4, the ITPCM algorithm has the highest delivery ratio among all algorithms, reaching 0.72–0.95. This is because the ITPCM algorithm uses the node cache to share the transmission task: as the node cache increases, the improvement in the delivery ratio is obvious, and the method increases cooperation between nodes. The spray and wait routing algorithm (copy = 30) has the lowest delivery ratio, only 0.22–0.47, because it floods information to the nodes of the community, resulting in the loss of much information. The Epidemic-TTL routing algorithm (TTL = 60 min) and the spray and wait routing algorithm (copy = 10) improve the delivery ratio of the original algorithms by adding conditions to the information transfer, raising the delivery ratio above 50%. As shown in Figure 4, the ITPCM algorithm greatly improves the delivery ratio as the cache increases.

Figure 5 shows the relationship between routing overhead and cache size. In Figure 5, the routing overhead of all four algorithms decreases greatly as the cache increases. In the ITPCM algorithm, the routing overhead drops from 210 to 23, because sufficient cache contributed by cooperating nodes supports data transmission, so information can be delivered by nodes very quickly. The routing overhead of the spray and wait routing algorithm (copy = 30) is reduced from 260 to 150, and that of the spray and wait routing algorithm (copy = 10) decreases from 280 to 140. Because of overtransmission, the reduction in these comparison algorithms is extremely obvious, indicating that the larger the node cache, the smaller the node cost. The Epidemic-TTL routing algorithm (TTL = 60 min) has a lower overhead than the spray and wait routing algorithms, but it is also affected by the cache. As a result, a larger node cache can effectively reduce the routing overhead in the community.

Figure 6 shows the relationship between the average end-to-end delay and cache size. The transmission delay of all algorithms decreases as the cache increases. The Epidemic-TTL routing algorithm (TTL = 60 min) reduces the transmission delay from 254 to 76, and the spray and wait routing algorithm (copy = 30) behaves basically the same as the Epidemic-TTL routing algorithm (TTL = 60 min). The spray and wait routing algorithm (copy = 10) is reduced from 256 to 117 because of its fast packet generation frequency and low delay. The ITPCM algorithm reduces the latency from 251 to 48, which shows that increasing the node cache can effectively improve the node transmission delay; in particular, by dynamically allocating and adjusting the cache during data transmission, ITPCM controls the information transmission delay further.

Figure 7 shows the relationship between energy consumption and cache size. As the cache increases, the decrease in the energy consumption of the ITPCM algorithm gradually levels off, and the ITPCM algorithm retains 45% of its energy over the 6-hour communication time. Because cooperation supports data transmission, some nodes can exchange data and then remain in a waiting state, so these nodes save energy for the next task. The spray and wait routing algorithm has the largest energy consumption, because every node has to transmit information to all neighbors in the community through spraying; specifically, the copy = 30 configuration consumes more than 90 units when the cache reaches 40 MB. The Epidemic-TTL routing algorithm adopts the meet-and-pass method and copies information from a single copy, which makes it better than the spray and wait routing algorithm in terms of energy optimization.

5. Conclusion

This work has presented the Information Transmission Probability and Cache Management (ITPCM) algorithm based on the node's data cache. The algorithm lets a node identify its surrounding neighbors and evaluate the meeting probabilities between nodes, adjust the distribution of cached data accordingly, ensure that nodes with high meeting probability obtain information preferentially, and thereby achieve the objective of cache adjustment. Meanwhile, to avoid deleting cached data, the caching burden of a node is shared with its neighbors by writing data into their caches, performing an effective diversion of data. In future work, this method can be adapted to big data environments to solve the transmission problems there.

Parameter Symbols

:The connection probability between nodes and at time
:The connection strength
:Connection setup time
:Connection broken time
:Connection time
:Node queue list
:Node connected at time
:Average value of location
:The maximum node service capability
:Local cache area
:Cooperative cache area.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the Major Program of the National Natural Science Foundation of China (71633006), the National Natural Science Foundation of China (61672540, 61379057), the China Postdoctoral Science Foundation funded project (2017M612586), and the Postdoctoral Science Foundation of Central South University (185684).