Security and Communication Networks

Special Issue: Theory and Engineering Practice for Security and Privacy of Edge Computing

Research Article | Open Access

Volume 2020 |Article ID 8897282 | https://doi.org/10.1155/2020/8897282

Haiping Huang, Qinglong Huang, Fu Xiao, Wenming Wang, Qi Li, Ting Dai, "An Improved Broadcast Authentication Protocol for Wireless Sensor Networks Based on the Self-Reinitializable Hash Chains", Security and Communication Networks, vol. 2020, Article ID 8897282, 17 pages, 2020. https://doi.org/10.1155/2020/8897282

An Improved Broadcast Authentication Protocol for Wireless Sensor Networks Based on the Self-Reinitializable Hash Chains

Academic Editor: Honghao Gao
Received: 20 Jun 2020
Revised: 24 Jul 2020
Accepted: 31 Jul 2020
Published: 01 Sep 2020

Abstract

Broadcast authentication is a fundamental security primitive in wireless sensor networks (WSNs), which are a critical sensing component of the IoT. Although the symmetric-key-based μTESLA protocol has been proposed, concerns remain about the difficulty of predicting the network lifecycle in advance and the security problems caused by an overlong hash chain. This paper presents a scalable broadcast authentication scheme named DH-TESLA, an extension and improvement of μTESLA and Multilevel μTESLA, which achieves several vital properties, such as an infinite lifecycle of hash chains, secure authentication, scalability, and strong tolerance of message loss. The proposal consists of the (t, n)-threshold-based self-reinitializable hash chain scheme (SRHC-TD) and the d-left-counting-Bloom-filter-based authentication scheme (AdlCBF). In comparison with other broadcast authentication protocols, our proposal achieves more security properties, such as support for fresh nodes' participation and DoS resistance. Furthermore, the reinitializable hash chain constructed in SRHC-TD is proved to be secure and incurs less computation and communication overhead than typical solutions, and efficient storage is realized based on AdlCBF, which can also defend against DoS attacks.

1. Introduction

With the rapid development of the Internet of Things (IoT) and 5G technology, the number of sensing terminals, such as various sensor nodes and tiny IoT devices, has increased dramatically [1–3]. Edge computing is an emerging paradigm that overcomes the scalability problem of the traditional wireless sensor networks (WSNs) architecture [4–7]. The combination of wireless sensor networks and edge computing can deploy the network and process the large amount of sensory data from sensor nodes more effectively.

In hostile and harsh conditions, such as large-scale agricultural monitoring and homeland border detection, sensor nodes are usually deployed to the monitoring area by aircraft, and the base station may be a temporarily deployed edge server such as a mobile weather station or a UAV (unmanned aerial vehicle for agricultural surveillance), whose computation and storage capacities are not always powerful. These sensor nodes are difficult to recycle and need to be replenished after damage or exhaustion. To effectively acquire and perceive data from massive numbers of sensors, the base station (or edge server) usually sends commands or application-update data packets to vast numbers of sensor nodes through broadcasting. It is necessary for sensor nodes to authenticate the identity of the sender, together with the validity and integrity of these messages [8–11]. Thus, broadcast authentication becomes an essential service in practical and secure wireless sensor networks or the IoT. A broadcast authentication protocol in wireless sensor networks needs to meet the following three principles [12]: (1) it should be hard for any malicious receiver to forge any packet from the sender; (2) the communication, computation, and storage overheads should be low; and (3) message loss or faults should be tolerated.

Based on these three principles, many broadcast authentication protocols for WSNs have appeared. μTESLA [13] is one of the most representative broadcast authentication protocols. The contributions of μTESLA lie in its low computation cost and high authentication speed, achieved through a special asymmetry mechanism realized by delaying the disclosure of symmetric keys. Subsequently, an improved version named Multilevel μTESLA [14] was proposed to extend the capability of μTESLA. With continuing attention to broadcast authentication, some novel protocols [15–19] have emerged in WSNs following the Multilevel μTESLA protocol.

Although these protocols are distinguished from each other, most of them are enhanced versions of μTESLA or Multilevel μTESLA, which contribute to safety and efficiency by improving the organization of broadcast authentication. However, their disadvantages cannot be ignored. First, a traditional asymmetric mechanism, such as the non-light-weight public key cryptography (PKC) in [20], is discouraged for broadcast authentication due to its high computation overhead and impracticability in resource-constrained sensor networks. Second, the light-weight hash key chain is adopted by most of these protocols. However, an overlong hash chain increases the storage overhead and the risk of information leakage on the base station, especially in the edge computing scenario, while a short one is consumed quickly, because the running time of the network is uncertain and the entire lifetime of a large-scale network is hard to predict. Third, a long hash chain carries security risks itself. Håstad and Näslund [21] show that, in a hash chain composed of the same hash function, inverting the i-th iteration is actually up to i times easier for an attacker than inverting a single hash function. Kwon and Hong [22] found some future keys of TESLA by utilizing the time-memory-data-tradeoff technique on a 64-bit hash chain.

Furthermore, in complex and dangerous environments, it is necessary to authenticate fresh nodes supplemented in the edge computing scenario. In the absence of trusted authentication interactions, it is easy for malicious nodes to inject fake data packets. In a severe case, the base station, acting as the authentication centre, will suffer DoS attacks. Similarly, the sensor nodes also need the ability to quickly distinguish valid messages from fake ones to resist DoS attacks. This means that the existing protocols cannot achieve a satisfactory security level.

In order to address the above problems in the existing protocols, further improve the security, and broaden the application scenarios, this paper puts forward a reformed broadcast authentication protocol named DH-TESLA. The main contributions of our proposal can be described as follows:
(i) We design a (t, n)-threshold-based self-reinitializable hash chain scheme (SRHC-TD), which securely constructs a reinitializable hash chain to extend the lifecycle of hash chains while keeping a desirable efficiency. In our scheme, hash chains can be generated continuously without the need to predict and determine the lifetime of the network in advance. This scheme also has a strong tolerance to message loss.
(ii) A d-left-counting-Bloom-filter-based authentication scheme (AdlCBF) is proposed, which handles the secure authentication of fresh sensor nodes. The AdlCBF scheme effectively meets the demands of authentication speed, memory space, and data security as the number of sensor nodes joining the network grows. This scheme ensures that our protocol scales well to a large network, and it can effectively resist DoS attacks on the base station caused by requests from massive numbers of illegal nodes.

The remainder of this paper is organized as follows. Section 2 gives the overview of related work. Some notations and concepts are defined in Section 3. The SRHC-TD scheme and AdlCBF scheme are described in Section 4 and Section 5, respectively. In Section 6, we describe the DH-TESLA protocol in detail. Section 7 illustrates the security analysis. The evaluation and comparison of the proposed schemes are described in Section 8. Section 9 concludes this paper.

2. Related Work

2.1. Hash Chain

The hash chain was first proposed by Lamport in 1981 and has been widely used in various applications due to its high security and efficiency, including one-time password (OTP) systems [23] and broadcast authentication [24]. However, after all hash values of a hash chain are consumed, a new hash chain must be generated, which is expensive in most applications. For example, in an OTP system, the commitment of a new hash chain and the corresponding parameters need to be re-registered with the server or the client, which consumes significant computation and communication overhead [25]. In the μTESLA protocol, unicasting the initial parameters on a node-to-node basis is quite expensive [24]. To solve this problem, many schemes have been put forward.
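The forward-only structure of a Lamport-style hash chain can be sketched in a few lines. This is a generic illustration, not the paper's exact construction: the hash function (SHA-256) and chain length are illustrative choices. The sender publishes the final link as the commitment and later discloses earlier links, which any receiver can check by hashing forward.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, length: int) -> list[bytes]:
    """chain[i] = h^i(seed); chain[length] is the public commitment."""
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain

def verify(disclosed: bytes, commitment: bytes, steps: int) -> bool:
    """A receiver authenticates a disclosed key by hashing it forward."""
    v = disclosed
    for _ in range(steps):
        v = h(v)
    return v == commitment

chain = make_chain(b"secret-seed", 100)
commitment = chain[-1]
# The key disclosed one step before the commitment needs one hash to check.
assert verify(chain[99], commitment, 1)
assert verify(chain[90], commitment, 10)
```

Once `chain[0]` is reached, the chain is exhausted; generating and authentically distributing a fresh commitment is exactly the re-registration cost the rest of this section discusses.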

Bicakci and Baykal [26] and Di Pietro et al. [27] employed public key cryptography to generate a hash chain. Although the problem of limited length was solved, the computational overhead increased (one public key operation takes hundreds of times as long as one hash operation). Park [25] constructed an infinite-length hash chain by employing multiple short hash chains for an OTP system. Nevertheless, this scheme cannot be used in broadcast authentication because the hash chain breaks if any authentication message is lost.

Goyal first proposed the reinitializable hash chain (RHC) scheme [28], whose main idea is that when an RHC is exhausted, a new RHC can be regenerated safely and undeniably. In 2006, Zhang and Zhu put forward the self-updating hash chain (SUHC) scheme based on the Hard Core Predicate algorithm [29]. The main idea of SUHC is to distribute every key value of the first chain along with one bit of the second chain's commitment. In this manner, the receiver gains all bits of the second chain's commitment by the time the first chain is exhausted. On the basis of [29], Zhang et al. designed the self-renewal hash chain (SRHC) scheme [30] as an improvement of SUHC, with a different selection algorithm for random numbers. Xu et al. also proposed a self-updating one-time password (SUOTP) mutual authentication protocol in a similar way [31]. However, in all these schemes, the commitment can be reconstructed if and only if all the random numbers are received in full.

2.2. Broadcasting Authentication Protocol in WSNs

μTESLA, derived from TESLA, was developed for resource-constrained networks [13]. However, it did not overcome the problems that the TESLA protocol experiences: mobile nodes losing authentication packets because of high velocities, the requirement of loose time synchronization, the limited length of the hash chain, and reliability. Therefore, many solutions were proposed to address the abovementioned problems.

Liu et al. [16] proposed a scalable μTESLA that introduces the Merkle hash tree into μTESLA for the distribution of initial parameters and commitments and also enhances scalability by increasing the number of senders. Liu and Ning proposed Multilevel μTESLA to extend the capability of μTESLA in three aspects [14]. First, it predetermines and broadcasts the initial parameters rather than unicasting them through the point-to-point authentication used in μTESLA. Second, it adopts a multilevel hash chain to distribute broadcast messages, which prolongs the usage cycle without increasing the length of the hash chain compared with μTESLA. Finally, it uses redundant message transmission and random selection strategies to distribute key chain commitments, which improves survivability against DoS attacks.

Recently, Al Dhaheri et al. [32] proposed TLI-μTESLA based on Multilevel μTESLA, which reduces the delay between the sender and the receiver. Kwon and Hong presented an extendable broadcast authentication scheme called X-TESLA, which considers the problems arising from sleep modes, network failures, and idle sessions [22]. Furthermore, a long-duration TESLA [33] was proposed to overcome the finite length of the hash chain used in TESLA by employing a hierarchical hash chain.

Apart from TESLA-like protocols, there exist other types of broadcast authentication protocols applicable to WSNs. Groza et al. [34] designed a light-weight broadcast authentication for the controller area network that achieves immediate authentication at a small cost in bandwidth. Shim et al. [35] proposed an identity-based broadcast authentication scheme called EIBAS using an ID-based signature. Similarly, a Chebyshev-map-based broadcast authentication was presented by Luo et al., which also uses an ID-based signature [8]. However, the overhead of ID-based signatures is higher than that of symmetric primitives. Besides, the Bloom filter (BF) and the counting Bloom filter (CBF) have been widely used in broadcast authentication schemes [36–39] to generate and verify authentication information, realize public key management when combined with hash chains, and compare multiple MACs (message authentication codes) to reduce the message size. Kim and An employed BF-based source authentication (BFBSA) to secure packets of variable sizes in WSNs [38]. Bao et al. [39] proposed a light-weight authentication scheme combining BF and TESLA, which prevents active attacks and adds a privacy-preserving feature for vehicular ad hoc networks.

3. Preliminaries

3.1. Notation

The symbols and notations used in our protocol are listed in Table 1.


Symbols | Description

Length of hash chain
Output length of hash function
The secret that needs to be shared
Secret shares
, Seed of hash chain
Number of secret shares
, Key authentication code
Current time used to synchronize the time of the whole network
Maximum clock difference between the sender and the receiver
The start time of interval
Duration of each time interval
Disclosure delay
Sensor node
, Private key and public key of sensor node, respectively
Base station
, The addresses of sensor node and base station, respectively
, Random nonce

Additionally, is designed as a message format, where is the message type, is the source address, is the destination address, and represents some additional options.

3.2. Basic Definitions

Definition 1 (partition). Partition is the process of transforming a binary number of bits into -radix numbers. This process can be simply represented as or . If cannot divide exactly, several zeros whose number is exactly the remainder should be filled on the front of the binary number.
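The symbols in Definition 1 were lost in extraction, but one plausible reading is splitting an n-bit value into fixed-width digits, left-padding with zeros when the digit width does not divide the bit length exactly. A small sketch under that assumed reading (the digit width of 4 bits, i.e. radix 16, is an illustrative choice):

```python
def partition(x: int, nbits: int, width: int) -> list[int]:
    """Split an nbits-long binary number into digits of `width` bits each,
    padding with leading zeros if width does not divide nbits exactly."""
    bits = bin(x)[2:].zfill(nbits)
    if len(bits) % width:
        bits = bits.zfill(len(bits) + width - len(bits) % width)
    return [int(bits[i:i + width], 2) for i in range(0, len(bits), width)]

# 6-bit value 101101 is padded in front to 00101101, giving digits 0010, 1101.
assert partition(0b101101, 6, 4) == [0b10, 0b1101]
```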

Definition 2 (repetition vector, value vector, and repetition degree). Suppose that there are variables, and the value of each variable is in a set containing integers. For a group of these variables, there are different values among these variables, which are denoted as . Then, is called repetition degree and is called value vector. Suppose that there are variables whose value is , such that . Then, is called repetition vector.

Definition 3 (repetition rate). For variables, the values are a set of integers, such that the repetition degree is . Then the number of assignments of these variables is . The repetition rate is defined as

Definition 4 (difficulty degree). Suppose that there are integers whose repetition degree is . Then is called difficulty degree.

Definition 5 (average difficulty degree). Suppose that there are integers, whose difficulty degree is and repetition rate is . Then average difficulty degree is defined as the weighted sum of difficulty degree:

Definition 6 (Mignotte’s sequence). Let t and n be integers such that n ≥ 2 and 2 ≤ t ≤ n. A (t, n)-Mignotte sequence is a sequence of pairwise coprime positive integers p1 < p2 < ⋯ < pn such that p(n−t+2) ⋯ pn < p1 p2 ⋯ pt.

4. (t, n)-Threshold-Based Self-Reinitializable Hash Chain Scheme

4.1. (t, n)-Mignotte’s Threshold Secret Sharing Scheme

Given a (t, n)-Mignotte sequence, the scheme works as follows [40]:
(1) The secret S is chosen as a random integer such that β < S < α, wherein α = p1 p2 ⋯ pt and β = p(n−t+2) ⋯ pn.
(2) The secret shares are chosen by the formula Ii = S mod pi, for all 1 ≤ i ≤ n.
(3) Given t distinct shares Ii1, …, Iit, the secret S can be recovered using the Chinese Remainder Theorem (CRT) as the solution of the system of congruences S ≡ Iij (mod pij), and any two such solutions are congruent modulo pi1 ⋯ pit.
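The scheme can be exercised end to end in a few lines of Python. The (3, 5) parameters, the coprime moduli, and the secret below are illustrative values, not ones from the paper; the CRT routine is a standard incremental construction.

```python
from math import prod

p = [11, 13, 17, 19, 23]      # pairwise coprime, increasing: a (3, 5)-Mignotte sequence
t, n = 3, 5
alpha = prod(p[:t])           # p1*p2*p3 = 2431
beta = prod(p[n - t + 1:])    # p4*p5 = 437
S = 1234                      # secret, chosen so that beta < S < alpha
shares = [(S % pi, pi) for pi in p]

def crt(pairs):
    """Chinese Remainder Theorem for pairwise coprime moduli."""
    x, m = 0, 1
    for r, pi in pairs:
        # extend x to also satisfy x ≡ r (mod pi), keeping x mod m fixed
        k = (r - x) * pow(m, -1, pi) % pi
        x, m = x + m * k, m * pi
    return x % m

# Any t shares suffice: their moduli multiply to at least alpha > S,
# so the CRT solution is unique and equals S.
assert crt(shares[:3]) == S
assert crt([shares[0], shares[2], shares[4]]) == S
```

Fewer than t shares leave the secret ambiguous modulo a product smaller than β, which is what makes the threshold meaningful.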

4.2. Construction of the Self-Reinitializable Hash Chain

The proposed self-reinitializable hash chain has multiple phases: initialization, publication, verification, recombination, and self-renewal. A new hash chain can be reinitialized without extra communication when the last hash chain is exhausted. Figure 1 shows the framework of the broadcast authentication process involving the construction of self-reinitializable hash chain.

4.2.1. Initialization

In the initialization phase, the sender and the receiver negotiate the length of hash chain and a secure hash function with a security parameter , which means that the output of is an -bits string, and can be partitioned into . At first, the sender does the following steps:
(i) Initialize a seed. Choose an appropriate -Mignotte sequence, denoted as . Unite all the sequential values into and into , for all .
(ii) Initialize a hash chain. Consider as the seed and generate a hash chain of length , as shown in (I) of Figure 1, where is an integral multiple of .
(iii) Generate the next chain. As in the first two steps, choose a new -Mignotte sequence and compute and . Then, we get a new hash chain:

Partition into -radix numbers, denoted as , whose repetition degree is and difficulty degree is :
(i) Let and . If and , then select as the secret . Otherwise, return to the previous step.
(ii) According to , calculate secret shadows .
(iii) Compute the key authentication codes (KAC), denoted as and , where , , , and is the number made up of the -th bit, the -th bit, the -th bit, and so on of . In particular, , , and .
(iv) Publish to the verifier securely. Actually, this pair of parameters will be sent cryptographically to the nodes in Algorithm 1 or 2 of Section 5.

Algorithm 1: Authentication of a fresh node directly by the base station.

(1)for fresh node do
(2):
(3): computes and compares the fingerprint
(4)if the fingerprint is true then
(5)  : , where
(6)else
(7)  Authentication failure
(8)end if
(9)end for
Algorithm 2: Authentication of a fresh node through a cluster head.

(1)for fresh node do
(2):
(3):
(4): computes and compares the fingerprints and
(5)if the first fingerprint is true then
  if the second fingerprint is true then
(6)   : , where
(7)   : , where
(8)  else
(9)   : , where
(10)  end if
(11)else
(12)  if the second fingerprint is true then
(13)   : , where
(14)  else
(15)   Authentication failure
(16)  end if
(17)end if
(18)end for
4.2.2. Publication

In the phase of publication, the sender computes and distributes hash values and the corresponding certification proofs for verification. For the -th () distribution, the sender does the following steps:
(i) Compute or retrieve the link values of hash chain and .
(ii) Compute and .
(iii) Construct and publish the certification frame , while in the -th distribution, publish the seed and , as shown in (II) of Figure 1.

4.2.3. Verification

For the -th () verification, the receiver does the following steps, as shown in (III) and (IV) of Figure 1:
(1) If , the receiver receives the certification frame from the sender:
(i) Compute and verify whether is equal to , where is a link value sent and saved in the last valid session.
(ii) Compute and verify whether is equal to .
If all checks are passed, the receiver verifies the sender successfully and then stores . The receiver should also store in the buffer for the next verification.
(2) If , the receiver drops and and saves . Then, it will wait for the next valid certification frame , where .
(i) Compute and verify whether is equal to , where is a link value sent and saved in the last valid session.
(ii) Compute and verify whether is equal to .
If all checks are passed, the receiver verifies the sender successfully and then stores and for the same reason.

4.2.4. Recombination

After all hash values have been published, the whole hash chain has been exhausted, and the receiver has stored the seed and or fewer s:
(i) Compute and check whether and share the same sequence value and the same secret shadow , where , , and .
(ii) After verifications for all KACs, the receiver obtains distinct sequence values and distinct secret shadows , and it also gets the ordering relation of permutation of all the sequence values.

Then, the receiver recovers in two ways:
(i) Ordered verification: for all the sequence values , the union of their subscripts is the commitment of the second chain, denoted by . That is, .
(ii) CRT verification: given () distinct pairs of , using CRT, the unique solution modulo of the set of equations is the commitment of the second chain, denoted by .

If , then the recombination is successful and we can obtain the new hash chain .

4.2.5. Self-Renewal

After recombination, the next chain starts to work and another new chain is also generated including a pair of instances and and the corresponding commitment . Iterations of the above processes have a result that hash chains work continuously and infinitely.

5. d-Left-Counting-Bloom-Filter-Based Authentication Scheme

In the combination of WSNs and edge computing, the base station is usually an edge server or is deployed as an unmanned aerial vehicle along with the sensor nodes, which requires the consideration of the computing and storage capabilities of the base station. These new sensor nodes need to be authenticated to make sure that they are not malicious nodes before joining the network. In the edge computing scenario, the storage space of the base station should be mainly used to cache and process the sensed data to provide data service for IoT applications. However, due to the participation of nodes, the base station needs to maintain a vast lookup table, which will occupy more storage space for authenticating the sensor nodes.

In the AdlCBF scheme, we aim to solve two problems: one is the reduction of storage overhead on the base station, and the other is the nodes’ authentication and the distribution of broadcast authentication parameters when a new node joins the network. ECDSA is used to generate a signature from the authentication information to be signed and the private key of the source. In order to reduce storage space, we introduce the d-left counting Bloom filter to construct a dlCBF that stores a fingerprint for each node instead of allocating storage space directly.

5.1. Construction of dlCBF

In DH-TESLA, every sensor node holds a public key and a private key allocated by the base station . The base station constructs a dlCBF to store the fingerprint of ID and key information instead of storing them directly. There are three steps to construct a dlCBF, as shown in Figure 2.
Step 1. The base station applies a hash function to map each node’s pair of ID and public key to the true fingerprint . Each true fingerprint consists of two parts: the first part represents the bucket index, which corresponds to the storage location, while the second part is the actual element to be stored in the corresponding array of .
Step 2. The base station uses additional pseudorandom permutations to expand locations, which are the alternative choices for each true fingerprint . As shown in Figure 2, these are subarrays , where represents the tuple of storage location and element corresponding to the -th choice of . There exist two kinds of collisions in a dlCBF. One is that two different fingerprint permutations map to the same location of a subarray, that is, but , which means that the bucket of each subarray needs more storage cells for different elements. The other is that two distinct true fingerprints have the same permutation, that is, but . In this case, a counter is needed to record the number of identical elements stored, so the storage cost of a bucket is the sum of the counters of all its storage cells.
Step 3. Among the choices, the base station selects the leftmost bucket with the minimum storage cost as the final storage location.
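The three construction steps can be sketched as follows. This is a simplified d-left hashing illustration, not the paper's exact dlCBF: the subtable count, bucket count, fingerprint width, and the SHA-256-based stand-in for the pseudorandom permutations are all assumed toy choices.

```python
import hashlib

D, BUCKETS = 4, 64                       # d subtables, buckets per subtable (toy sizes)
tables = [dict() for _ in range(D)]      # bucket index -> {remainder: counter}

def fingerprint(node_id: bytes, pubkey: bytes) -> int:
    """Step 1: hash the (ID, public key) pair to a true fingerprint."""
    return int.from_bytes(hashlib.sha256(node_id + pubkey).digest()[:8], "big")

def choices(fp: int):
    """Step 2: derive one (bucket, remainder) candidate per subtable."""
    out = []
    for i in range(D):
        h = hashlib.sha256(i.to_bytes(1, "big") + fp.to_bytes(8, "big")).digest()
        out.append((int.from_bytes(h[:4], "big") % BUCKETS,
                    int.from_bytes(h[4:12], "big")))
    return out

def insert(fp: int):
    cand = choices(fp)
    # bucket load = sum of its counters; Step 3 picks the leftmost least-loaded bucket
    loads = [sum(tables[i].get(b, {}).values()) for i, (b, _) in enumerate(cand)]
    i = loads.index(min(loads))
    b, rem = cand[i]
    cell = tables[i].setdefault(b, {})
    cell[rem] = cell.get(rem, 0) + 1     # counter absorbs identical-remainder collisions

def lookup(fp: int) -> bool:
    return any(rem in tables[i].get(b, {})
               for i, (b, rem) in enumerate(choices(fp)))

fp = fingerprint(b"node-42", b"pubkey-bytes")
insert(fp)
assert lookup(fp)
```

Storing only a short remainder per node, rather than the full ID and key, is what lets the base station keep its authentication state small; deletion would decrement the matching counter.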

5.2. Authentication Process of Fresh Nodes

When a fresh sensor node applies to join the network, it should be authenticated and only the legal one can acquire the key chain commitment and other configuration information. Because the participation of new nodes always occurs during the whole network lifetime, the authenticating process appears before and after the network initialization phase. In other words, there are two kinds of fresh nodes: one can communicate with the base station directly and the other one is not close to it. Thus, two different cases will be discussed, respectively. In the former case, the base station authenticates the fresh node directly by Algorithm 1. Meanwhile, in the latter case, the fresh node will be authenticated through cluster head by Algorithm 2.

Both Algorithms 1 and 2 only describe the authentication process of single fresh node. When multiple fresh sensor nodes participate in the network synchronously, the cluster heads will aggregate the authentication application messages to generate only one revised authentication message and then send it to the base station that can batch-process them. Thus, these two algorithms can also be used in the multiauthentication cases.

6. DH-TESLA Protocol

Based on the above schemes, SRHC-TD and AdlCBF, the proposed protocol is described in five phases: setup, bootstrap, broadcast, authentication, and new chain generation. We explain how the base station broadcasts messages at the beginning of the network.

6.1. Setup

The base station first generates a sequence of secret messages using SRHC-TD scheme, which is described in Section 4.

6.2. Bootstrap

Any receiver in the network should have the commitment and relative parameters of the reinitializable one-way hash chain acquired by Algorithms 1 and 2 in Section 5, including the length of hash chain , a secure hash function , and the partition pair . Besides, the legal node also gains other initial parameters, such as the current time , the maximum clock difference between the sender and the receiver, the start time of interval , and the duration of each time interval .

6.3. Broadcast

The lifetime of a sensor node, which is much longer than that of one-way hash chain, is divided into fixed intervals of duration . The sender uses the key value of hash chain to compute the message authentication code (MAC) of message packets in the current time interval. Then, the sender broadcasts the packet with MAC in the same time interval and discloses a key value of hash chain with corresponding key authentication code (KAC) after a certain delay .
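The broadcast-then-disclose pattern can be illustrated with a toy TESLA-style example. The chain length, interval index, and delay are illustrative, and the KAC check of Section 4 is omitted here; only the MAC-now, key-later asymmetry is shown.

```python
import hashlib, hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Build a short hash chain; keys are consumed in reverse generation order.
L = 10
chain = [b"seed"]
for _ in range(L):
    chain.append(h(chain[-1]))
key = lambda i: chain[L - i]      # K_i for interval i
commitment = chain[L]             # given to every receiver at bootstrap

# Interval i: the sender broadcasts (msg, MAC under K_i); K_i stays secret.
i, msg = 3, b"cmd: report temperature"
mac = hmac.new(key(i), msg, hashlib.sha256).digest()

# Interval i + d: K_i is disclosed. The receiver first authenticates the key
# by hashing it forward i steps to the commitment, then checks the buffered MAC.
disclosed = key(i)
v = disclosed
for _ in range(i):
    v = h(v)
assert v == commitment
assert hmac.compare_digest(hmac.new(disclosed, msg, hashlib.sha256).digest(), mac)
```

The delay d is what creates the asymmetry: by the time K_i is public, the interval in which it could have been used to forge MACs has already closed.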

6.4. Authentication

When a node receives a message packet with the MAC, it stores the packet and the MAC in the buffer. Once the node receives a key disclosure packet with KAC, the sensor node first verifies the key value with KAC, which is related to the message packet that has been stored, as described in Section 4.

6.5. Generate a New Chain

When the hash chain runs out, the base station should generate the next chain and the node needs to recover the commitment of the new chain.

7. Security Analysis

7.1. Resistance to Chosen-Plaintext Attacks

Except for the initial seed and the commitment, each key of a hash chain is not only a ciphertext (as an output) but also a plaintext (as an input) of the hash function. Thus, all values share the same length, which reduces the key space as well as the time needed for cracking, making the chain vulnerable to chosen-plaintext attacks. In our scheme, the KAC consists of two parts, which can be used to check whether the shared secret has been changed. All of them are different because of the left shift operation, which can effectively prevent chosen-plaintext attacks.

7.2. Fault Tolerance

The length of key chain is an integral multiple of , so all can be transmitted cyclically when the network is running. Thus, it can avoid the situation where one missing secret shadow would make the secret unsolvable. Before a hash chain is running out, each receiver stores at most, while only distinct sequence values and secret shadows are needed to recover the secret . In other words, the number of faults or missing message packets should be less than . The highest packet loss rate that our solution can tolerate is .

Assume that the probability that a sensor node fails to receive a key disclosure packet is , and that the same secret shadow is disclosed times; then the probability that a certain secret shadow is never received by the sensor node drops to . Suppose that the sensor node has only a half chance of receiving each packet (the loss rate will be lower in practice); that is, and . The probability that the sensor node loses is less than . In our scheme, in order to recover the commitment of the next hash chain, only secret shadows are needed. Thus, the probability that the sensor node receives different secret shadows is . Then, the probability that the sensor node cannot obtain the commitment of the next hash chain is

If the length of hash chain is 321, which can cover about 5 minutes with time interval of 1 second, and ; the probability that the sensor node cannot recover the commitment of next hash chain when the hash chain runs out is .
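The fault-tolerance argument above can be checked numerically. The paper's concrete t, n, and repetition count were lost in extraction, so the parameters below are illustrative: each of n shadows is rebroadcast `reps` times, each packet is lost independently with probability `p_loss`, and recovery fails when fewer than t distinct shadows arrive.

```python
from math import comb

def p_fail(t: int, n: int, p_loss: float, reps: int) -> float:
    """P(node recovers fewer than t of the n secret shadows) when each
    shadow is rebroadcast `reps` times and packets are lost independently."""
    q = p_loss ** reps   # P(every copy of one particular shadow is lost)
    return sum(comb(n, k) * (1 - q) ** k * q ** (n - k) for k in range(t))

# With no repetition, 50% loss makes failure likely; cyclic rebroadcast of
# each shadow a handful of times drives the failure probability to negligible.
print(p_fail(4, 7, 0.5, 1))   # noticeable failure probability
print(p_fail(4, 7, 0.5, 8))   # vanishingly small
```

This mirrors the text's observation: because the chain length is a multiple of n, shadows are rebroadcast cyclically, and the per-shadow miss probability shrinks exponentially in the repetition count.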

7.3. DoS Attack Tolerance

Both the base station and the sensor node are vulnerable to DoS attack that is hard to prevent. The attacker may forge the message, such as key distribution message and authentication message, to confuse both the sensor node and the base station.

In Multilevel μTESLA, when the lower-level chain draws to an end, the upper level is used to authenticate the commitment of the next lower-level chain through a commitment distribution message (CDM), in its -th interval, where denotes concatenation, is the commitment of the lower-level chain, and or is a key of the upper-level chain. The attacker can fabricate a CDM by replacing or , and the sensor node can easily verify with , which is released in . Nevertheless, the sensor node has no way to authenticate the other parameters in because the key will be released later. To solve this problem, Multilevel μTESLA provides two variants, which either need more buffers (hundreds of bytes) to store multiple CDMs on the sensor node or require more storage space for additional precomputed chains on the base station and a larger payload of CDMs. Meanwhile, in our protocol, the commitment of the next chain is released along with the distribution of keys. When a sensor node receives a certification frame , it can be authenticated immediately by a single XOR operation. The sensor node can thus distinguish valid messages from invalid ones very quickly with only a small buffer (about 8 bytes). Under the same conditions as in Section 7.2, assuming that the bandwidth is 10 kbps and the hash function output is 64 bits, the relative communication overhead is just .

On the other hand, a fresh sensor node needs to be authenticated before joining the network. An attacker may launch the flood attack toward the base station with forged messages containing valid public keys, which can be obtained by eavesdropping. However, the attacker cannot alter the ID information of or . Even if the ID can be forged, after receiving the same authentication message multiple times, the base station will find that it is being attacked. Meanwhile, a sensor node will be aware that an attack is going on and alert the base station when it fails to receive the authentication response message or receive the wrong responses multiple times in one round. Also, when a cluster head receives the authentication message , the authentication response message , or added authentication response message multiple times from the same node, it will be aware that an attack is going on and alert the base station.

7.4. Provable Security

Because SRHC-TD is formed by many chains, we will discuss the provable security of SRHC-TD with two cases: one is the successive key in the same hash chain; and the other is the head-tail key connecting the current hash chain and the next one.

We choose the Random Oracle model to analyse the provable security of SRHC-TD. In this model, attackers possess calculation ability polynomial in the secret parameter, and the reliability of all algorithms of SRHC-TD is determined by that parameter. Specifically, the probability of cracking the hash function is the reciprocal of an exponential function of the parameter, the probability of cracking the CRT is the reciprocal of a power function of it, and the probability of cracking the partition algorithm is the reciprocal of a logarithmic function of it.

(a) After the th time interval, we suppose that the receivers (including the attackers) have received . Attackers must forge and within the th time interval and make them satisfy the equations and . Suppose that the calculation ability of attackers can be represented as a polynomial and that they can query the Oracle any number of times within each time interval. If the probability of cracking the hash operation is , then the probability of breaking SRHC-TD can be expressed as , where can be expressed in the general form of a polynomial. We can therefore deduce . Evaluating the limit value of , we deduce .

(b) At the end of the th time interval, only has not been published in the first hash chain. The receivers (including the attackers) have obtained number of . Attackers must forge and within the th time interval and make them meet the following conditions: (1) ; (2) when , and contain the same sequence value and the same shadow , where , ; (3) the solution derived from the proof method of ordering is the same as the solution derived from the proof method of the CRT. The fake value must meet all three conditions. We suppose that the calculation ability of attackers can be represented as a polynomial and that they can query the Oracle any number of times within each time interval. If the probability of cracking the hash operation is , that of cracking the CRT is , and that of cracking the partition algorithm is , then the probability of breaking SRHC-TD can be expressed as , where can be expressed in the general form of a polynomial . Evaluating the limit value of , we deduce .

Considering the above two cases, there exists a positive integer beyond which, for any larger value, the attackers' success probability becomes negligible. Therefore, the success probability of attackers can be ignored, and the SRHC-TD scheme is provably secure.
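For intuition, the CRT-based threshold reconstruction that SRHC-TD builds on (a Mignotte-style (t, n) sharing over the sequence listed in Table 2) can be sketched as follows. This is a minimal illustration with small toy moduli, not the paper's parameters:

```python
from math import prod

# A (2, 3)-threshold Mignotte sequence: pairwise-coprime moduli m1 < m2 < m3
# with m1 * m2 > m3, so any 2 shares determine the secret but 1 share does not.
MODULI = [11, 13, 17]          # illustrative values, not the paper's sequence
SECRET = 100                   # must satisfy m3 < SECRET < m1 * m2 (17 < 100 < 143)

def make_shares(secret, moduli):
    """Each share is simply the secret reduced modulo one element of the sequence."""
    return [secret % m for m in moduli]

def crt(residues, moduli):
    """Recover the unique solution modulo prod(moduli) via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

shares = make_shares(SECRET, MODULI)   # [1, 9, 15]
# Any 2 of the 3 shares reconstruct the secret; 1 share alone does not.
assert crt([shares[0], shares[1]], [11, 13]) == SECRET
assert crt([shares[1], shares[2]], [13, 17]) == SECRET
assert crt([shares[2]], [17]) != SECRET
```

The threshold property comes entirely from the gap condition on the moduli: any t residues pin the secret inside the interval, while t − 1 residues leave it ambiguous.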

8. Performance Evaluation

8.1. Evaluation of SRHC-TD

There are two possible ways to store a hash chain at the base station. One is to generate the chain and allocate dedicated space to store it in full. The other is to store only the seed of the chain and compute each link value when it is needed. Obviously, the former reduces computation at the cost of memory, while the latter does the opposite.
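The two storage strategies can be sketched as follows. This is a minimal illustration assuming SHA-256 as the chain hash (the paper's own benchmarks use an RC5-based construction):

```python
import hashlib

def h(x: bytes) -> bytes:
    """One application of the chain's hash function (SHA-256 here for illustration)."""
    return hashlib.sha256(x).digest()

class WholeChainStore:
    """Strategy 1: precompute and keep the entire chain (fast lookup, O(L) memory)."""
    def __init__(self, seed: bytes, length: int):
        chain = [seed]
        for _ in range(length - 1):
            chain.append(h(chain[-1]))
        # chain[0] is the seed; chain[-1] is the commitment published first.
        self.chain = chain

    def key(self, i: int) -> bytes:
        # Keys are released in reverse order of generation.
        return self.chain[len(self.chain) - 1 - i]

class SeedOnlyStore:
    """Strategy 2: keep only the seed, recompute on demand (O(1) memory, O(L) hashing)."""
    def __init__(self, seed: bytes, length: int):
        self.seed, self.length = seed, length

    def key(self, i: int) -> bytes:
        v = self.seed
        for _ in range(self.length - 1 - i):
            v = h(v)
        return v
```

Both strategies yield identical keys; receivers verify a released key by hashing it once and comparing against the previously released one.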

Considering the above two situations, the consumption of SRHC-TD is compared with that of the RHCs [28-30] in two respects, computation and communication, as follows. Table 2 lists the variable symbols and their names. Table 3 details the consumption of SRHC-TD and the RHCs.


Symbol	Description

The output length of hash function
The length of a hash chain
The length of Mignotte’s sequence
The computational overhead of a hash function
The computational overhead of generating a random number
The computational overhead of mapping a random number to one single bit using Hard Core Predicate
The computational overhead of generating a secret shadow in SRHC-TD
The computational overhead of computing in SRHC-TD
The computational overhead of a left shift operation
The computational overhead of an XOR operation
The computational overhead of computing solutions to equations


Storage method: the whole chain / the seed

RHC [28]
Initialization: computation; communication
Publication, verification, and recombination: computation; communication

SUHC [29]
Initialization: computation; communication
Publication, verification, and recombination: computation; communication

SRHC [30]
Initialization: computation; communication
Publication, verification, and recombination: computation; communication

SRHC-TD
Initialization: computation; communication
Publication, verification, and recombination: computation; communication

For ease of analysis, we assume that the length of the hash chain is greater than the output length of the hash function. In practice, the length of a hash chain is usually around 1000, because a short chain leads to frequent regeneration or to failure under an excessive packet-loss rate. It is notable that if the base station stores only seeds, the calculation time grows considerably with the chain length; the length of the hash chain is thus a significant factor in the computation time.

From Table 3, we can see that the storage method of the hash chain has no effect on the communication overhead. In the initialization phase, the above schemes have the same communication overhead, while in the publication, verification, and recombination phases, our scheme has the same overhead as SRHC and a lower overhead than RHC and SUHC, as shown in Figure 3. To evaluate the computation overhead of our scheme, we implement the operations used in the proposed scheme and the other schemes on an Ubuntu 12.04 virtual machine with an Intel Core i5-4300 CPU @ 2.60 GHz. We use the 64-bit version of RC5 and generate a Mignotte's sequence of length 32. In this case, each hash operation takes 0.0037 ms. Besides, the times consumed by the shift, XOR, and modular operations are ms, ms, and ms, respectively. Generating a 32-bit random number and takes ms, and recovering the commitment takes 0.0161 ms. The time consumed in computing is small enough to be ignored.

Figure 4 shows that, in the initialization phase, our scheme takes approximately 0.26 ms longer than the other three schemes, owing to the generation of the shared secrets. However, in the publication, verification, and recombination phases, our scheme takes less time, especially when the base station stores the entire hash chain, as shown in Figure 5. If the hash chain contains 1000 keys and the base station stores the entire chain, the computation overhead of SRHC-TD is reduced by 39.07%, 77.08%, and 63.28% compared with [28-30], respectively. If the base station stores only the seed of the hash chain, our scheme's computation overhead is reduced by 0.2% to 0.81%. It is worth noting that when the base station stores only the seed, the computation time is roughly 1000 times that of the case when it stores the entire chain. The main reason is that when the base station releases the key that the sensor nodes use to authenticate the broadcast message, it must first recover that key: with only the seed stored, the base station has to hash the seed value many times, whereas if it stores the entire hash chain, at the cost of more storage space, the key can be looked up directly. A more efficient compromise is for the base station to store some "checkpoints" from which keys are recovered; the more checkpoints the base station stores, the closer its time consumption comes to that of whole-chain storage, according to Figure 5(a).
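The checkpoint compromise can be sketched as follows. This is a minimal illustration assuming SHA-256 as the chain hash and an evenly spaced checkpoint interval, not the paper's exact construction:

```python
import hashlib

def h(x: bytes) -> bytes:
    """One application of the chain's hash function (SHA-256 here for illustration)."""
    return hashlib.sha256(x).digest()

class CheckpointStore:
    """Store every `step`-th link of an L-link chain; recovering any key then
    costs at most step-1 hash operations instead of up to L-1."""
    def __init__(self, seed: bytes, length: int, step: int):
        self.length, self.step = length, step
        self.checkpoints = {}
        v = seed
        for j in range(length):
            if j % step == 0:
                self.checkpoints[j] = v   # j = number of hashes applied to the seed
            v = h(v)

    def key(self, i: int) -> bytes:
        # Key i sits at chain position length-1-i (keys are released in reverse order).
        j = self.length - 1 - i
        base = (j // self.step) * self.step
        v = self.checkpoints[base]
        for _ in range(j - base):
            v = h(v)
        return v
```

With L = 1000 and step = 10, the base station stores 100 hash values instead of 1000 yet never hashes more than 9 times to release a key, matching the trend visible in Figure 5(a).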

It is thus straightforward to compare the consumption of SRHC-TD with that of the RHCs, as summarized in Table 4. SRHC-TD performs better in the publication, verification, and recombination phases, and its computation and communication overheads are much lower than those of the other schemes, especially when the node stores only the seed of the hash chain.


Storage method: the whole chain / the seed

Initialization
Computation: SRHC < SUHC < RHC < SRHC-TD (both storage methods)
Communication: SRHC-TD = RHC = SRHC = SUHC (both storage methods)

Publication, verification, and recombination
Computation: SRHC-TD < RHC < SRHC < SUHC (both storage methods)
Communication: SRHC-TD = SRHC < RHC = SUHC (both storage methods)

As we know, the largest contributor to energy consumption in WSNs is communication, with computation second: the energy cost of sending or receiving a packet is several times that of computing over or processing a packet [41]. Over the lifetime of the network, the publication, verification, and recombination phases last much longer than the initialization phase, and the initialization phase runs on the base station. Although our scheme has a higher overhead in the initialization phase, its overall energy consumption is lower than that of the other schemes.

8.2. Evaluation of AdlCBF

In the AdlCBF scheme, a fresh node can be authenticated by the base station using an ECDSA signature and the node's fingerprint. ECDSA is a lightweight algorithm based on elliptic curves, and a sensor node (or cluster-head node) only needs to generate and verify a signature once when it joins the network; the overhead of authenticating a fresh sensor node is therefore acceptable. Besides, the d-left counting Bloom filter is adopted to reduce the storage space. We compare AdlCBF with Direct Access and with the CBF, respectively, in terms of storage overhead.
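For intuition, a much-simplified d-left counting Bloom filter can be sketched as follows. The subtable sizes, the MD5-based index derivation, and the tie-breaking rule here are illustrative choices, not the exact AdlCBF construction (which additionally applies a pseudorandom permutation, per Table 5):

```python
import hashlib

class DLeftCBF:
    """Simplified d-left counting Bloom filter sketch (not the paper's exact AdlCBF):
    d subtables, each with `buckets` buckets holding (remainder -> counter) cells.
    Insertion follows the d-left rule: place the fingerprint remainder in the
    least-loaded candidate bucket, breaking ties toward the leftmost subtable."""
    def __init__(self, d=4, buckets=64, rem_bits=14):
        self.d, self.buckets, self.rem_mask = d, buckets, (1 << rem_bits) - 1
        self.tables = [[dict() for _ in range(buckets)] for _ in range(d)]

    def _candidates(self, item: bytes):
        # Derive d (subtable, bucket, remainder) triples from one hash of the item.
        digest = hashlib.md5(item).digest()   # Table 5's configuration uses MD5
        for i in range(self.d):
            v = int.from_bytes(digest[4 * i:4 * (i + 1)], "big")
            yield i, v % self.buckets, (v >> 8) & self.rem_mask

    def add(self, item: bytes):
        cands = list(self._candidates(item))
        # If the remainder already sits in a candidate bucket, bump its counter.
        for i, b, r in cands:
            if r in self.tables[i][b]:
                self.tables[i][b][r] += 1
                return
        # Otherwise apply the d-left rule: least-loaded bucket, leftmost on ties.
        i, b, r = min(cands, key=lambda c: (len(self.tables[c[0]][c[1]]), c[0]))
        self.tables[i][b][r] = 1

    def query(self, item: bytes) -> bool:
        return any(r in self.tables[i][b] for i, b, r in self._candidates(item))

    def delete(self, item: bytes):
        for i, b, r in self._candidates(item):
            if r in self.tables[i][b]:
                self.tables[i][b][r] -= 1
                if self.tables[i][b][r] == 0:
                    del self.tables[i][b][r]
                return
```

Because only a short remainder plus a small counter is stored per cell, the filter occupies far less space than a plaintext table of (public key, ID) pairs, at the cost of a small false positive probability.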

8.2.1. Comparison with Direct Access

The basic simulation configuration for constructing a dlCBF is shown in Table 5, where stands for a large prime number.


Parameter	Value

Elements number	500000
Subsequences number	4
Bucket number per subsequence	
Average load per bucket	6
Cells number per bucket	8
Counter length	2 bits
Storage element length	14 bits
Hash function	MD5
Pseudorandom permutation	
Proportional parameter in permutation	A random odd number below 

Suppose that the length of the public key in ECDSA is ; then , where is the maximum value on the elliptic curve. To guarantee the uniqueness of each node's public key, let . Thus we obtain , and the minimum length of the public key is 19 bits. In the same way, the minimum length of the ID is also 19 bits. So, if the 500000 elements are stored in plaintext form, the required memory space is bits, whereas when they are stored in AdlCBF, the required space is bits; the former is 1.78125 times the latter. In conclusion, AdlCBF has better storage efficiency than Direct Access.
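The storage arithmetic above can be checked directly. Note that the AdlCBF figure below is derived from the quoted 1.78125 ratio rather than taken from the paper:

```python
import math

N = 500_000                      # number of stored elements (public key + ID pairs)

# Minimum bit-length giving each of N nodes a unique public key (and, likewise, ID):
min_bits = math.ceil(math.log2(N))
assert min_bits == 19            # 2**19 = 524288 >= 500000 > 2**18

# Direct Access: each element stored in plaintext as a 19-bit key plus a 19-bit ID.
plaintext_bits = N * (19 + 19)   # = 19,000,000 bits

# The paper reports that plaintext storage is 1.78125x the AdlCBF storage,
# which implies an AdlCBF table of roughly 10.67 Mbit (derived, not quoted):
adlcbf_bits = plaintext_bits / 1.78125
```

The 19-bit bound is tight: 2^18 = 262144 would not give 500000 nodes unique identifiers.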

On the constructed AdlCBF, we repeat the operations of adding, querying, and deleting an element 3000 times each. We record the time consumed by each operation and average the recorded times over every 300 repetitions, which yields one test; the remaining 9 tests are conducted similarly. The results are shown in Table 6, and the changes in consumed time are shown more intuitively in Figure 6.


	Addition	Query	Deletion

Test 1	0.23	0.08	0.32
Test 2	0.21	0.07	0.35
Test 3	0.31	0.13	0.40
Test 4	0.25	0.12	0.42
Test 5	0.20	0.10	0.35
Test 6	0.27	0.10	0.39
Test 7	0.30	0.09	0.30
Test 8	0.24	0.08	0.38
Test 9	0.21	0.07	0.35
Test 10	0.23	0.09	0.36
Average time	0.25	0.10	0.36

As shown in Table 6, the query time, which is in fact the authentication time in AdlCBF, varies only slightly relative to the addition and deletion times. In AdlCBF the main operations are performed by the CPU, while in Direct Access they are performed by RAM; as we know, the read-write speed of the CPU is two to three orders of magnitude faster than that of RAM. From the space and the query time of AdlCBF, we can infer that the required time for Direct Access lies in the range of 1.78 ms to 17.81 ms. In conclusion, AdlCBF has more satisfactory query efficiency.

8.2.2. Comparison with CBF

The upper bound on the false positive probability of AdlCBF is , while the false positive probability of the CBF is , where the standard CBF uses counters for tracking elements and each counter is 4 bits [42]. When , the two approaches occupy the same amount of space. However, when considering the ratio of the false positive probability of the CBF to that of AdlCBF, we obtain <