Abstract

This paper addresses the problem of developing authentication protocols dedicated to a specific scenario in which an entity with limited computational capabilities must prove its identity to a computationally powerful Verifier. An authentication protocol suitable for this scenario, which jointly employs the learning parity with noise (LPN) problem and the paradigm of random selection, is proposed. It is shown that the proposed protocol is secure against active attacks and against so-called GRS man-in-the-middle (MIM) attacks. In comparison with related previously reported authentication protocols, the proposed one reduces the implementation complexity while providing at least the same level of cryptographic security.

1. Introduction

Expansion of the Internet of Things (IoT) and Machine-to-Machine (M2M) communications has implied additional challenges regarding information security. In a number of scenarios, at least one of the entities involved is a tiny device with very limited computational capabilities and heavy restrictions regarding power consumption. Accordingly, a challenge is the development of information security techniques which minimize the computational and power consumption overheads implied by the security requirements.

Authentication of one entity, called the Prover, to another, called the Verifier, has been well recognized as one of the cornerstones of achieving the desired level of information security (as well as cybersecurity). Authentication protocols for restricted implementation scenarios have been considered in a number of papers, including [1–3]. This is also in line with a discussion recently reported in [4].

This paper considers an authentication approach suitable for scenarios where an entity with highly constrained computational capabilities must, in a secure way, authenticate itself to a verification party with high-performance computational capabilities.

The reported protocols appear insufficiently suitable because either (i) they are not lightweight enough for the tiny party of an authentication protocol and do not take into account the asymmetric implementation constraints, or/and (ii) they do not provide the desired level of cryptographic security.

Consequently, in this paper, we jointly employ certain elements of the reported protocols to achieve our main goal: development of an authentication protocol with asymmetric implementation complexity at the Prover and Verifier sides which provides the desired, provable level of cryptographic security.

2. Background

2.1. Family of HB Authentication Protocols

The origin of a family of authentication protocols based on the hardness of the learning parity with noise (LPN) problem is a lightweight two-pass authentication protocol called the HB protocol, reported in [5]. The simplicity of the approach and its provable security, implied by the fact that the LPN problem is NP-complete (see [6]), have attracted much interest. The HB protocol requires only basic AND and XOR operations, and it has been proved secure against passive attacks via reduction to the LPN problem. However, it is insecure against a stronger adversary, an active adversary, who has the ability to impersonate a reader and interact with legitimate tags. In order to address this weakness, a modified HB protocol called the HB+ protocol has been reported in [7–9]. The HB+ protocol has been proved secure against active attacks, but it has been shown in [10] that the HB+ protocol is insecure against a man-in-the-middle (MIM) attack. Specifically, in [10], a linear time MIM attack against the HB+ protocol, called the GRS-MIM attack, has been reported. Later on, a number of variants of the HB+ protocol have been proposed to prevent the GRS-MIM attack (including [11, 12]), but all of them were later shown to be insecure. After a number of unsuccessful attempts, [13] has extended the HB+ protocol and proposed a new protocol called HB#, which requires only three-pass communication and is secure against GRS-MIM attacks. While two vectors are shared by the Prover and the Verifier as the secret keys in the HB+ protocol, two matrices are shared by the parties as the secret keys in the HB# protocol: by increasing the size of the secret keys, the HB# protocol achieves stronger security and reduces the communication complexity. However, [14] has described an MIM attack against the HB# protocol. After that, several three-pass protocols that resist MIM attacks were proposed. Three-pass authentication protocols with stronger security have thus been well studied.
From a practical aspect, however, a two-pass authentication protocol is more desirable than a three-pass one. The construction of a two-pass authentication scheme with even active security had been an open problem for a long time. In [15], a two-pass authentication protocol called the AUTH protocol has been proposed. The AUTH protocol is the first two-pass protocol which achieves active security, and it yields a large improvement in terms of round complexity. Also, [15] has reported two variants of the AUTH protocol, which could be called the AUTH+ protocol and the AUTH# protocol. In the AUTH+ protocol, the computational complexity decreases in exchange for an increased number of secret keys. In the AUTH# protocol, the communication complexity decreases in exchange for an increased size of the secret keys, like in the HB# protocol. Later on, in [16], the active security of the AUTH# protocol has been proved employing a modular approach which simplifies the proof; for this proof, a new computational assumption, called the MSLPN assumption, has been introduced.
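As a concrete illustration of the operations involved, the following is a minimal sketch of an HB-style round in the spirit of [5]: the tag answers each challenge with a noisy inner product over GF(2), and the reader accepts if the number of disagreements stays below a threshold. The parameter names (`n`, `eta`, `q`, `t`) and the function layout are illustrative assumptions, not the notation of the cited papers.

```python
import random

def inner_gf2(a, s):
    """Inner product of two bit vectors over GF(2)."""
    return sum(x & y for x, y in zip(a, s)) % 2

def hb_round(s, eta):
    """One HB round: reader sends a random challenge a,
    tag replies with <a, s> XOR a Bernoulli(eta) noise bit."""
    a = [random.randrange(2) for _ in range(len(s))]
    noise = 1 if random.random() < eta else 0
    return a, inner_gf2(a, s) ^ noise

def hb_authenticate(s, eta, q, t):
    """Run q rounds; accept the tag if at most t answers disagree
    with the reader's own noiseless recomputation."""
    errors = 0
    for _ in range(q):
        a, z = hb_round(s, eta)
        if z != inner_gf2(a, s):
            errors += 1
    return errors <= t

random.seed(7)
s = [random.randrange(2) for _ in range(32)]  # shared secret
accepted = hb_authenticate(s, eta=0.125, q=64, t=16)
```

Only AND and XOR (and counting) are used, which is what makes the family attractive for tiny devices.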

2.2. HB# Authentication Protocol

The HB# authentication protocol is a three-round challenge-response protocol which has been proposed and analyzed in [13]. Random-HB# is a generalization of HB+ in which the form of the two shared secrets has been changed from bit vectors into binary matrices. The Random-HB# protocol is displayed as follows.

Parameters: the dimensions of the secret matrices, the parameter of the Bernoulli noise distribution, and the acceptance threshold.

For an additional explanation of the notations, please refer to Section 3.2.

While Random-HB# has a number of similarities with the HB+ protocol, there are important differences as well. In particular, the final verification by the reader consists of the comparison of two vectors: the received response and the locally recomputed one.

2.3. Authentication Employing Random Selection

Design and security evaluation of authentication protocols based on the random selection paradigm have been initially reported in [17–19]. The principle of random selection can be described as follows. Suppose that the Verifier Alice and the Prover Bob run a challenge-response authentication protocol which uses a lightweight symmetric encryption operation E of block length n, with keys from an appropriate key space K. Suppose further that E is weak in the sense that a passive adversary can efficiently compute the secret key k from samples of the form (x, E_k(x)). This is obviously the case if E is linear.

Random selection denotes a method for compensating the weakness of E by using the following mode of operation. Instead of holding a single key, Alice and Bob share a collection of r keys k_1, ..., k_r from K as their common secret information, where r is a small constant.
(i) Upon receiving a challenge x from Alice, Bob chooses a random index j and outputs the response E_{k_j}(x).
(ii) The verification of the response with respect to x can be efficiently done by computing E_{k_i}(x) for all i = 1, ..., r.
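The mode of operation above can be sketched as follows. For illustration only, the weak operation is taken to be the trivially linear E_k(x) = x XOR k (a single known plaintext/ciphertext pair reveals k, which is exactly the kind of weakness random selection is meant to compensate for); the names and parameter values are assumptions, not the constructions of [17–19].

```python
import random

def E(k, x):
    """A deliberately weak, linear 'encryption': E_k(x) = x XOR k.
    One known pair (x, E_k(x)) reveals k."""
    return [xi ^ ki for xi, ki in zip(x, k)]

random.seed(2)
n, r = 16, 4
# Alice and Bob share a small collection of r keys instead of one.
keys = [[random.randrange(2) for _ in range(n)] for _ in range(r)]

# (i) Bob answers the challenge under a randomly selected key.
challenge = [random.randrange(2) for _ in range(n)]
j = random.randrange(r)              # the selected index stays secret
response = E(keys[j], challenge)

# (ii) Alice verifies by trying all r keys.
accepted = any(E(k, challenge) == response for k in keys)
```

The point of the paradigm is that the adversary no longer knows which key produced a given sample, while verification cost grows only by the small factor r at the powerful party.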

2.4. Security Evaluation of an Authentication Protocol

The common scenarios for security evaluation against impersonation attacks are as follows. The basic one is a passive attack scenario which proceeds in two phases: in the first phase the adversary eavesdrops on a (large) number of interactions between the Prover and the Verifier, and in the second phase (where the Prover is no longer available) she attempts to cause the Verifier to accept the authentication response. In an active attack, the adversary is additionally allowed to interact with the Prover in the first phase. The strongest and most realistic attack model is a man-in-the-middle (MIM) attack, where the adversary can arbitrarily interact with the Prover and the Verifier (with polynomially many concurrent executions allowed) in the first phase.

3. Proposal of a Dedicated Authentication Technique

This section proposes an authentication protocol with asymmetric implementation complexity which is suitable for authentication of a Prover with low computational capabilities to a Verifier with high-performance computational capabilities.

3.1. Underlying Ideas for Design

Taking into account results on the authentication protocols reported in [13, 17–20], this paper proposes a novel authentication protocol which is based on a nontrivial hybridization and upgrading of certain previously reported results.

The initial observations regarding certain previously reported protocols are the following ones:

(i) The protocols reported in [13, 20] provide a number of interesting framework elements for developing highly secure authentication protocols, but they appear not light enough for a number of M2M authentication scenarios and do not take into account asymmetric implementation constraints. It is desirable to reduce the implementation complexity of certain authentication protocols with implementation potential in tiny Provers, like the HB# authentication protocol [13].

(ii) The protocols reported in [17–19] employ an interesting paradigm of random selection but do not provide the desired level of cryptographic security.

Accordingly, the underlying ideas for developing the novel authentication protocol were the following:
(i) Employ framework elements of the HB# authentication protocol and modify it in order to fit the implementation restrictions at a tiny Prover and the asymmetric implementation and execution capabilities of the Prover and Verifier sides.
(ii) Do not employ elements of the reported protocols which do not support the lightweightness of the authentication at the party (usually the Prover) with tiny capabilities.
(iii) Employ the power of the random selection approach to enhance the cryptographic security of the protocol at the tiny party, as a trade-off between the cryptographic security of the protocol and its increased implementation complexity at the more powerful party (usually the Verifier).

Particularly, note the following:
(i) Instead of employing two secret keys as in the source HB# protocol, we propose employing one secret key together with the random selection paradigm for achieving the same security goals.

3.2. Notations

We use the following notations:
(i) {0,1}^n and {0,1}^{m×n} denote, respectively, the set of all n-dimensional binary vectors and the set of binary m × n matrices.
(ii) Normal, bold lowercase, and bold capital letters denote single elements, vectors, and matrices, respectively.
(iii) For a vector x, x_i denotes the ith element of x.
(iv) x ⊕ y is the bitwise XOR of two vectors x and y; that is, (x ⊕ y)_i = x_i ⊕ y_i for all i. Similarly, X ⊕ Y is defined as the bitwise XOR of two binary matrices X and Y.
(v) wt(x) denotes the Hamming weight of a binary vector x, which is the number of its nonzero elements.
(vi) Sampling from a finite set S denotes the operation of drawing a value from the uniform distribution on S.
(vii) Ber_η represents the Bernoulli distribution with parameter η, and ν ← Ber_η means that for a bit ν, Pr[ν = 1] = η and Pr[ν = 0] = 1 − η.
(viii) Sampling a vector from the set of binary vectors of a given length and a given Hamming weight means that the vector is chosen uniformly at random among all such vectors.
(ix) Let c be a vector of length n and weight w. A circulant matrix over the vector c is a matrix with n columns whose first column is c, and each next column is produced by a rotation of the previous column one position downwards. The elements of the set of circulant matrices with n columns, generated over vectors of length n and weight w, can be ordered in an array.
(x) An (m × n)-binary Toeplitz matrix is a matrix where, for each diagonal from upper-left to lower-right, all the elements on the diagonal have the same value. Note that the entire matrix is specified by the top row and the first column, so it can be parametrized by m + n − 1 bits. Note that circulant matrices are also a kind of Toeplitz matrix. Toeplitz matrices can be generated efficiently and have good statistical properties [13].
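The circulant and Toeplitz constructions in items (ix) and (x) can be sketched as follows; the function names are illustrative.

```python
def circulant(c):
    """n x n circulant matrix whose first column is c; each next column
    is the previous one rotated one position downwards."""
    n = len(c)
    cols, col = [], list(c)
    for _ in range(n):
        cols.append(col)
        col = [col[-1]] + col[:-1]  # rotate one position downwards
    # transpose the list of columns into a row-major matrix
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def toeplitz(top_row, first_col):
    """m x n Toeplitz matrix: constant along each upper-left-to-lower-right
    diagonal, hence fully specified by m + n - 1 bits."""
    assert top_row[0] == first_col[0]  # shared corner element
    m, n = len(first_col), len(top_row)
    return [[top_row[j - i] if j >= i else first_col[i - j]
             for j in range(n)] for i in range(m)]
```

For example, `circulant([1, 1, 0])` has columns [1,1,0], [0,1,1], [1,0,1], each of weight 2, matching the weight of the generating vector.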

An algorithm is probabilistic if it makes random choices during its execution. A probabilistic algorithm is probabilistic polynomial-time (PPT) if, for any input, the computation of the algorithm terminates in at most a polynomial number of steps in the length of the input. We also use the term efficient algorithm as a synonym for a PPT algorithm. A function f is negligible if for every positive polynomial p, with n large enough, it holds that f(n) < 1/p(n).

3.3. Proposal of the Authentication Protocol

The authentication protocol NHB# (Nondeterministic HB#) is displayed as follows:

Parameters: the dimensions of the secret matrix, the parameter of the Bernoulli noise distribution, and the acceptance threshold.

This authentication protocol NHB# (Nondeterministic HB#) is an interactive protocol executed between a Prover and a Verifier, which are efficient algorithms. They share one secret key, a matrix X, which is produced by the key-generation algorithm for the given security parameter. As a result of the protocol execution, the Verifier outputs an acceptance indicator out ∈ {0, 1}, such that out = 1 if the Verifier confirms that the Prover is valid; otherwise the indicator value is set to out = 0.

Public Parameters:
(i) n and m: the dimensions of the secret matrix X, which depend on the security parameter;
(ii) η: the parameter of the Bernoulli distribution (0 < η < 1/2);
(iii) t: the acceptance threshold, such that ηm < t < m/2.

Key Generation. The key-generation algorithm samples a random matrix X as the secret key and returns it to the Prover and the Verifier.

Protocol Specification.
Phase I: the protocol initialization, executed by the Prover (the Prover sends a blinding message to the Verifier).
Phase II: challenge generation, executed by the Verifier (the Verifier sends a challenge message to the Prover).
Phase III: response generation, executed by the Prover (the Prover computes the noisy response and sends it to the Verifier).
Phase IV: the verification process, executed by the Verifier: the received response is checked against each of the acceptance subcriteria in turn; if some subcriterion is satisfied, the Verifier sets out = 1; otherwise it outputs out = 0.
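The extraction above lost the concrete formulas of the four phases, so the following is only a rough illustrative sketch, not the paper's specification. It assumes an HB#-like response of the form z = (a ⊕ b)·X_j ⊕ ν, where the Prover randomly selects one matrix X_j out of r shared secret matrices and the Verifier tries all r subcriteria; the message order, the combining rule, and all names are assumptions.

```python
import random

def vecmat_gf2(v, M):
    """Row vector (length n) times binary n x m matrix, over GF(2)."""
    m = len(M[0])
    return [sum(v[i] & M[i][j] for i in range(len(v))) % 2 for j in range(m)]

def xor(u, v):
    return [x ^ y for x, y in zip(u, v)]

def wt(v):
    return sum(v)

random.seed(5)
n, m, r, eta, t = 16, 64, 4, 0.1, 20
# r shared secret n x m matrices (the random-selection pool).
keys = [[[random.randrange(2) for _ in range(m)] for _ in range(n)]
        for _ in range(r)]

# Phase I: Prover sends a blinding message b.
b = [random.randrange(2) for _ in range(n)]
# Phase II: Verifier sends a challenge a.
a = [random.randrange(2) for _ in range(n)]
# Phase III: Prover randomly selects a key index and adds Bernoulli noise.
j = random.randrange(r)
noise = [1 if random.random() < eta else 0 for _ in range(m)]
z = xor(vecmat_gf2(xor(a, b), keys[j]), noise)
# Phase IV: Verifier accepts if some subcriterion matches within threshold t.
out = 1 if any(wt(xor(z, vecmat_gf2(xor(a, b), K))) <= t for K in keys) else 0
```

The Prover performs only one vector-matrix product over GF(2) plus noise sampling, while the Verifier performs r such products, which is where the asymmetric complexity of the design shows up.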

Let us discuss the error rates of the protocol. A false rejection happens when a legitimate Prover gets rejected by the Verifier, that is, when the weight of the noise vector generated in the response phase is greater than the threshold t. Therefore, the probability of this event (the completeness error) is the probability that the weight of a vector of m independent Ber_η bits exceeds t, that is, the sum over i = t + 1, ..., m of C(m, i) η^i (1 − η)^{m−i}.

A false acceptance occurs when an illegitimate Prover sends a randomly chosen response to the Verifier and gets authenticated. Taking into account the number of binary vectors of length m whose weight is at most t and the different acceptance subcriteria of the Verifier, this happens with probability (the soundness error) at most r · (sum over i = 0, ..., t of C(m, i)) / 2^m, where r is the number of acceptance subcriteria.
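The two error probabilities can be evaluated numerically. The expressions below are the standard binomial forms implied by the surrounding text (noise weight exceeding t for false rejection; r subcriteria and the number of weight-at-most-t vectors for false acceptance); the parameter values are illustrative, not the paper's choices.

```python
from math import comb

def completeness_error(m, eta, t):
    """P[false rejection] = P[wt(noise) > t], noise ~ Ber(eta)^m."""
    return sum(comb(m, i) * eta**i * (1 - eta)**(m - i)
               for i in range(t + 1, m + 1))

def soundness_error(m, t, r):
    """Union bound on a random response passing one of r subcriteria:
    r * (#vectors of weight <= t) / 2^m."""
    return r * sum(comb(m, i) for i in range(t + 1)) / 2**m

p_fr = completeness_error(m=64, eta=0.1, t=16)
p_fa = soundness_error(m=64, t=16, r=4)
```

Both quantities shrink rapidly as m grows with t kept between ηm and m/2, which is what makes the threshold choice in the public parameters meaningful.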

Protocol Storage Optimization. Following the proposal in [13], the storage cost of the secret key may be reduced to m + n − 1 bits by using a Toeplitz matrix as X instead of a random matrix. This type of storage reduction was applied in [13] to the Random-HB# protocol, resulting in the HB# protocol, which uses two Toeplitz matrices as secret keys. The security of HB# is based on the conjecture about the hardness of the so-called Toeplitz-MHB Puzzle. In our case, the security of this optimized protocol version is based on the analogously plausible conjecture about the hardness of the Toeplitz-MLPN problem (see Note 1, Section 5).
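A quick sanity check of the storage figures (the dimensions are chosen only for illustration, not taken from the paper):

```python
# Secret-key storage: full random matrix vs Toeplitz parametrization.
n, m = 16, 64
full_bits = n * m          # a random n x m binary matrix
toeplitz_bits = n + m - 1  # top row + first column, corner counted once
ratio = full_bits / toeplitz_bits
```

For these illustrative dimensions the Toeplitz form needs roughly an order of magnitude fewer key bits, at the cost of the stronger structural hardness conjecture mentioned above.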

4. The Security Evaluation Framework

We consider two types of attack scenarios for our protocol: active and GRS-MIM scenarios. Both of them consist of two phases: the so-called learning and forgery phases. In the first phase, the adversary interacts with the Prover and/or Verifier, learning the information she needs in order to be successful in the second phase, where she interacts with the Verifier trying to make him output out = 1.

Definition 1. The active attack is being executed in two phases.

Phase 1. The adversary interacts only with the honest Prover for a polynomial number of times .

Phase 2. The adversary interacts with the Verifier trying to impersonate the Prover.

Definition 2. GRS-MIM attack is being executed in two phases.

Phase 1. The adversary interferes in executions of the protocol. The adversary can eavesdrop on or modify all messages between the honest Prover and the honest Verifier and also gets the Verifier's decision on each execution of the protocol.

Phase 2. The adversary interacts with the Verifier trying to impersonate the Prover.

Let ⟨A, V⟩ denote a complete execution of the NHB# protocol between a party A and the Verifier V, and say that ⟨A, V⟩ takes value 1 if the execution ends with the Verifier's acceptance (i.e., out = 1) and takes value 0 otherwise (out = 0).

Then we define the advantage of an active adversary A as the probability that ⟨A, V⟩ = 1 after an active learning phase, and the advantage of a GRS-MIM adversary A as the probability that ⟨A, V⟩ = 1 after a GRS-MIM learning phase.

If this advantage is nonnegligible, we say that the adversary is successful in the given attack scenario against the protocol. The protocol is secure against active attacks if, for all efficient active adversaries , the advantage is negligible. Similarly, the protocol is secure against MIM attacks if, for all efficient MIM adversaries , the advantage is negligible.

5. Security Evaluation in the Active Attacking Scenario

LPN Problem. Let s ∈ {0,1}^n be a secret key and 0 < η < 1/2. We denote by D_{s,η} the probability distribution over {0,1}^n × {0,1} whose samples are pairs (a, z), where a is sampled uniformly from {0,1}^n and z = ⟨a, s⟩ ⊕ ν, ν ← Ber_η. Let O_{s,η} denote the oracle taking samples from the distribution D_{s,η}, and let U be the oracle taking samples from the uniform distribution over {0,1}^n × {0,1}. The LPN problem consists of distinguishing access to the oracle O_{s,η} from access to the oracle U.

The advantage of a distinguisher D is defined as the absolute difference between the probabilities that D outputs 1 when interacting with each of the two oracles.

Definition 3 (LPN problem). The LPN problem is (T, Q, ε)-hard if, for every distinguisher D running in time T and making Q queries, the advantage is less than ε. In asymptotic terms, LPN is hard if for every efficient D the advantage is negligible.
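The two oracles of Definition 3 can be sketched as follows. For illustration we also show the gap a distinguisher exploits: a tester that *knows* s sees the LPN oracle agree with ⟨a, s⟩ with probability 1 − η, while the uniform oracle agrees with probability 1/2 (a real distinguisher does not know s, of course; the names and parameters are illustrative).

```python
import random

def inner_gf2(a, s):
    return sum(x & y for x, y in zip(a, s)) % 2

def lpn_oracle(s, eta):
    """Sample from D_{s,eta}: (a, <a,s> XOR Ber(eta) noise)."""
    a = [random.randrange(2) for _ in range(len(s))]
    noise = 1 if random.random() < eta else 0
    return a, inner_gf2(a, s) ^ noise

def uniform_oracle(n):
    """Sample from the uniform distribution over {0,1}^n x {0,1}."""
    return [random.randrange(2) for _ in range(n)], random.randrange(2)

random.seed(11)
s = [random.randrange(2) for _ in range(24)]
agree_lpn = sum(z == inner_gf2(a, s)
                for a, z in (lpn_oracle(s, 0.05) for _ in range(500))) / 500
agree_uni = sum(z == inner_gf2(a, s)
                for a, z in (uniform_oracle(24) for _ in range(500))) / 500
```

The hardness assumption says precisely that, without s, no efficient procedure can detect this statistical gap.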

Matrix LPN (MLPN) Problem. The matrix LPN problem is defined analogously to the LPN problem, with the difference that the secret key is now a matrix, not a vector.

Let X ∈ {0,1}^{n×m} be a secret key and 0 < η < 1/2. We denote by D_{X,η} the probability distribution with samples (a, z), where a is sampled uniformly from {0,1}^n and z = aX ⊕ ν, ν ← Ber_η^m. Let O_{X,η} denote the oracle producing samples from D_{X,η}, and let U be the oracle taking samples from the uniform distribution over {0,1}^n × {0,1}^m. The MLPN problem, that is, the matrix variant of the LPN problem, is to distinguish access to the oracle O_{X,η} from access to the oracle U.

For a distinguisher D, we define the advantage of D as the absolute difference between the probabilities that D outputs 1 when interacting with each of the two oracles.

Definition 4 (MLPN problem). The MLPN problem is (T, Q, ε)-hard if, for all distinguishers D running in time T and making Q queries, the advantage is less than ε. In asymptotic terms, MLPN is hard if all efficient distinguishers achieve a negligible advantage.

Theorem 5 (equivalence to LPN). If the LPN problem is (T, Q, ε)-hard, then the MLPN problem is (T′, Q, mε)-hard, where T′ differs from T only by the polynomial cost of the simulation.

Proof. We slightly adapt the proof of the corresponding proposition in [16]. As usual, we assume that there exists a distinguisher D_M for MLPN achieving advantage ε and use it as a subroutine to construct a distinguisher D for LPN whose advantage is at least ε/m, which contradicts the hardness assumption of LPN.
Let X be a matrix with columns x_1, ..., x_m, and let 0 ≤ u ≤ m. We define the hybrid probability distribution D_u over {0,1}^n × {0,1}^m whose samples are pairs (a, z), where a is uniform, z_i = ⟨a, x_i⟩ ⊕ ν_i with ν_i ← Ber_η for i ≤ u, and z_i is a uniformly random bit for i > u.
For u = 0, ..., m, we denote by p_u the probability that the distinguisher D_M outputs 1 when its input consists of samples from D_u. Note that D_m is the same as D_{X,η}, and D_0 is the same as the uniform distribution; therefore |p_m − p_0| = ε. The distinguisher D will embed the samples of its unknown oracle O (which is either the LPN oracle or the uniform oracle) into the samples it forwards to D_M, and it will produce the same output as D_M. Now follows the precise description of the actions taken by the algorithm D. First, D chooses u uniformly from {1, ..., m} and random vectors x_1, ..., x_{u−1}. Then, for each query of D_M, it performs the following steps: (1) take a sample (a, z) from the unknown oracle O; (2) make a sample (a, z′), where z′_i = ⟨a, x_i⟩ ⊕ ν_i, ν_i ← Ber_η, for i < u, z′_u = z, and z′_i is a uniformly random bit for i > u; (3) forward (a, z′) as input to the distinguisher D_M. Finally, D outputs the same value (0 or 1) returned by D_M. Note that if O is the LPN oracle, then (a, z′) produced in Step (2) is a sample from D_u; if O is the uniform oracle, then (a, z′) is a sample from D_{u−1}.
In particular, the advantage of the distinguisher D can be computed as |(1/m)·(p_1 + ... + p_m) − (1/m)·(p_0 + ... + p_{m−1})| = (1/m)·|p_m − p_0| = ε/m. Thus, the distinguisher D achieves advantage greater than or equal to ε/m, which contradicts the assumed hardness of LPN.

Note 1. Similarly as for the corresponding conjecture in [13], the hardness of the Toeplitz variant of the MLPN problem can be conjectured, where a Toeplitz matrix is used as the secret key instead of a random matrix.

Theorem 6. Let 0 < η < 1/2 and t = τm, where τ is a constant satisfying η < τ < 1/2. If the MLPN problem is hard, then the NHB# protocol is secure against active attacks.

Proof. The proof consists of the following four parts: (i) specification of a contradiction scenario which is a framework for the security evaluation where an active adversary exists; (ii) design of a distinguishing algorithm D which uses the learning phase of the adversary for attacking a hard problem; (iii) the procedure for distinguishing between two oracles; and (iv) estimation of the success rate of D in the distinguishing phase.
(i) We assume the opposite of the statement of Theorem 6; that is, the protocol is not actively secure, and an active adversary exists achieving a nonnegligible advantage; this will contradict the hardness assumption of the MLPN problem. The addressed scenario is formally specified in the following claim.
Claim. Suppose that there is an active adversary A interacting with the Prover in at most q executions of the protocol, running in time T and achieving advantage ε. Then there exists a PPT algorithm D, running in time comparable to T and making q oracle queries, which achieves a nonnegligible distinguishing advantage.
(ii) The learning phase: at first, D initializes the learning phase of the adversary A, while simulating the honest Prover of the protocol NHB#: (1) D takes a sample from the oracle O; (2) D, acting as the Prover, sends the corresponding blinding message to the adversary A; (3) the active adversary forwards a challenge to D; (4) D forms the simulated response from the oracle sample; and (5), acting as the Prover, D sends this value to the active adversary A. The previous steps are repeated q times.
(iii) The oracle distinguishing phase: D takes the following actions: (1) it initiates a communication with the adversary A (after its learning phase), which sends a blinding message; (2) it chooses two different random challenge vectors; (3) it sends the first challenge to the adversary A and receives a response in return; (4) it rewinds A (behind Step (1)), sends the second challenge, and receives the corresponding answer; (5) for each of the acceptance subcriteria, it combines the two answers and outputs 1 if the resulting value satisfies the acceptance condition for some subcriterion; otherwise it outputs 0.
(iv) The success rate of D in the distinguishing phase: we analyze the success rate of the algorithm D in recognizing the distribution used by the oracle O.
(1) If O uses the uniform distribution, then the simulated responses in the learning phase are uniformly random, so the adversary A could not learn the secret key correctly. D outputs 1 only if the answers produced by the adversary satisfy the acceptance condition for some subcriterion; since A did not learn the key, we can assume that these answers are essentially random, so this event has only a small probability.
(2) Suppose that D had access to the MLPN oracle. Then D simulated the honest Prover correctly in the learning phase of A, so the adversary authenticates to the protocol with the nonnegligible probability ε. That means that, with nonnegligible probability, both answers of the adversary produced in Steps (3) and (4) of the oracle distinguishing phase satisfy the acceptance condition within the threshold t for some subcriterion. Therefore, by the triangle inequality in the Hamming metric, their combination stays within the doubled threshold, so D outputs 1.
Thus, depending on whether D was interacting with the uniform oracle or the MLPN oracle, the probabilities of producing 1 as output differ by a nonnegligible amount. Therefore, the distinguisher D achieves a nonnegligible advantage, which contradicts the hardness assumption of the MLPN problem.

6. Security Evaluation in the Restricted Man-in-the-Middle Attacking Scenario

We prove the GRS-MIM security following the technique used in [13].

Lemma 7 (see [13]). If X is a random (n × m)-binary matrix, δ a nonzero n-bit vector, u an integer in the interval [0, m/2], and H(·) the binary entropy function, then it holds that Pr[wt(δX) ≤ u] ≤ 2^{−m(1−H(u/m))}.

The previous lemma also holds if X is a random Toeplitz matrix (see the appendix of [13]).

Theorem 8. Suppose that there exists an efficient GRS-MIM adversary attacking the NHB# protocol by modifying at most q executions of the protocol between the Prover and the Verifier, running in time T and achieving advantage at least ε. Then, under an easily met condition on the parameter set, there is an active adversary attacking NHB#, interacting at most q times with the honest Prover, running in time comparable to T and achieving a nonnegligible advantage.

Proof. The proof consists of the following two parts: (i) specification of the learning phase of the MIM adversary and (ii) evaluation of the advantage which the MIM adversary can achieve after the learning phase.
The Learning Phase of the MIM Adversary. In order to provide a valid learning phase for the GRS-MIM adversary, the active adversary takes the roles of a simulated honest Prover and a simulated honest Verifier, as follows.
The honest Prover sends a blinding vector to the active adversary, which, playing as the simulated Prover, forwards it to the GRS-MIM adversary. Playing as the simulated Verifier, the active adversary sends a random vector as a challenge. The GRS-MIM adversary modifies this challenge and sends the modified challenge toward the Prover; the active adversary forwards it to the honest Prover. The honest Prover returns its response, which the active adversary, as the simulated Prover, forwards to the GRS-MIM adversary.
If the modification of the challenge is the all-zero vector, the simulated Verifier sets out = 1; otherwise out = 0.
The previous procedure is repeated in all iterations of the learning phase.
The Advantage of the MIM Adversary. We consider that the adversary has achieved successful learning if the simulated Prover and the simulated Verifier were executed correctly in each iteration of the learning phase, that is, if they behaved like the honest Prover and the honest Verifier.
Since the active adversary directly forwards the responses of the honest Prover to the received queries, the simulated Prover works correctly in each simulation step.
On the other hand, the behaviours of the honest and the simulated Verifier do not have to match in all circumstances.
This happens in two cases: when the simulated Verifier accepts a response which would get rejected by the honest Verifier, or when it rejects a response which would get accepted by the honest Verifier.
In the first situation, since the simulated Verifier accepts the response, it means that it is the unmodified response of the honest Prover which gets rejected by the honest Verifier, so the probability of this event is equal to the completeness error of the protocol.
The second case, when in some iteration the simulated Verifier rejects a response which the honest Verifier would accept, means that it is a response whose challenge modification is not all-zero, but this response still gets accepted by the honest Verifier.
The probability of this acceptance is the probability that the Verifier's acceptance condition, with the threshold t, holds for some acceptance subcriterion despite the nonzero modification of the challenge.
Let us denote by e the XOR of the received response and the response the honest Verifier recomputes for the modified challenge; e is the XOR of the noise vector ν and the image of the nonzero challenge modification δ under the secret key, that is, e = δX ⊕ ν. Then in the vector e, wt(δX) bits follow the distribution Ber_{1−η}, and the rest follow the distribution Ber_η. Therefore, the expected value of wt(e) equals wt(δX)(1 − η) + (m − wt(δX))η, which is an increasing linear function of wt(δX).
Therefore, according to the Chernoff bound, the event wt(e) ≤ t has an exponentially small probability whenever wt(δX) is large enough that the expected weight of e lies sufficiently above the threshold t.
On the other hand, by Lemma 7, the probability (over the choice of the secret key) that wt(δX) is small is itself negligible.
Combining the two bounds, the probability that a response with a nonzero modification δ gets accepted by the honest Verifier is upper-bounded by the sum of two terms: the first term is negligible by Lemma 7, and the second term gets negligible provided the exponent in the Chernoff bound is positive; that condition is easily met for the usual parameter values [13].
Therefore, the probability of incorrect simulation in a single iteration is negligible. Thus, the probability that all iterations of the learning phase are simulated correctly, that is, that the learning phase is successfully conducted, tends asymptotically to 1.
We conclude that the advantage of the active adversary attacking NHB# is, up to a negligible difference, the same as the advantage of the GRS-MIM adversary, which is a nonnegligible value, so this contradicts the active security of that protocol.

7. A Concluding Discussion

This paper proposes an authentication protocol with asymmetric implementation complexity which is suitable for authentication of a Prover with low computational capabilities to a Verifier with high-performance computational capabilities. The protocol is based on a trade-off between the execution overheads at the Prover and the Verifier: more computational effort is required at the Verifier's side in order to maintain the desired level of authentication security.

The proposed protocol originates from the HB# protocol [13], but it reduces the required secret key dimension to half of the one required in the HB# protocol. The reduction of the required secret key dimension and the asymmetric computational overheads at the Prover and the Verifier appear as a consequence of employing the random selection paradigm. The security of the proposed authentication protocol results from the joint employment of the LPN problem and the random selection paradigm. In this paper, the security of the proposed authentication protocol has been proved in the active attacking and a restricted MIM (so-called GRS-MIM) attacking scenario. We conjecture that the protocol could achieve security in MIM attacking scenarios stronger than GRS-MIM, and this is one of the directions for the related future work.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The Ministry of Education, Science and Technological Development, Serbia, has partially funded this work.