Abstract

The rise of wireless applications based on RFID has raised major privacy concerns. Indeed, when such an application is deployed nowadays, informed customers expect guarantees that their privacy will not be threatened. One formal way to perform this task is to assess the privacy level of the RFID application with a model. However, if the chosen model does not reflect the assumptions and requirements of the analyzed application, it may misevaluate its privacy level. Therefore, selecting the most appropriate model among all the existing ones is not an easy task. This paper investigates the eight most well-known RFID privacy models and thoroughly examines their advantages and drawbacks in three steps. Firstly, five RFID authentication protocols are analyzed with these models. This discloses a main worry: although these protocols intuitively ensure different privacy levels, no model is able to accurately distinguish them. Secondly, these models are grouped according to their features (e.g., tag corruption ability). This classification reveals the most appropriate candidate model(s) to be used for a privacy analysis when one of these features is especially required. Furthermore, it points out that none of the models is comprehensive; hence, some combinations of features may not match any model. Finally, the privacy properties of the eight models are compared in order to provide an overall view of their relations. This part highlights that no model globally outclasses the others. Considering the required properties of an application, the thorough study provided in this paper aims to assist system designers in choosing the best suited model.

1. Introduction

Radio Frequency IDentification (RFID) is a technology that permits identifying and authenticating remote objects or persons without line of sight. In a simple manner, a tag (i.e., a transponder composed of a microcircuit and an antenna) is embedded into an object and interacts with a reader when it enters within its electromagnetic field. The first use of RFID goes back to the early 1940s, during World War II, when the Royal Air Force deployed the IFF (Identify Friend or Foe) system to identify Allied airplanes. Today, RFID is more and more exploited in many domains such as library management, pet identification, car antitheft systems, anticounterfeiting, ticketing in public transportation, access control, or even biometric passports. It thus covers a wide range of wireless technologies, from systems based on low-cost tags (such as EPCs [1]) to more evolved ones operating with contactless smartcards [2, 3].

As predictable, some problems come up with this large-scale deployment. One general assumption of RFID systems is that the messages exchanged between the tags and the readers can easily be eavesdropped by an adversary. This raises the problem of information disclosure when the data emitted by a tag reveal details about its holder (called “information leakage”), but also when the eavesdropping of communications allows tracking a tag at different places or times (called “malicious traceability”) and consequently its holder. Many articles have pointed out the dangers of RFID with respect to privacy, and the authorities are now aware of this problem. For instance, Ontario Information and Privacy Commissioner Cavoukian advocates the concept of “privacy-by-design” [4], which states that privacy should be built into every IT system before its widespread use. In 2009, the European Commissioner for Justice, Fundamental Rights and Citizenship issued a recommendation [5] which strongly supports the implementation of privacy in RFID-based applications.

Much research has emerged in recent years to fight against information leakage and malicious traceability in RFID. However, the search for a generic, efficient, and secure solution that can be implemented in reasonably costly tags remains open [6–8]. Solutions are usually designed empirically and analyzed with ad hoc methods that do not detect all their weaknesses. In parallel, many investigations have been conducted to formalize the privacy notion in RFID. In 2005, Avoine was the first researcher to present a privacy model [9]. Since then, many attempts [10–22] have been carried out to propose a convenient and appropriate privacy model for RFID. But each one suffers from distinct shortcomings. In particular, most of these models generally do not take into account all the alternatives that an adversarial power may offer. For instance, when an adversary is allowed to corrupt a tag, then several possibilities may arise: a corrupted tag could be either destroyed or not, and, in the latter case, this tag could still be requested to interact within the system. At Asiacrypt 2007, Vaudenay introduced the most evolved RFID privacy model [22] known so far. However, this model is not as convenient as some protocol designers may expect, and they sometimes prefer to use a less comprehensive model to analyze a system. Consequently, providing an analysis and a comparison of the major RFID privacy models is meaningful to help designers in their choice. Such a work aims to highlight the strengths and weaknesses of each model. Su et al. already carried out a similar work in [23]. Unfortunately, they only focused on privacy notions and did not consider all the subtleties that are brought by the different models. As a consequence, their study considers some models as weak, even though they offer interesting properties.

Our contribution is threefold. Firstly, in Sections 3 to 10, we chronologically present eight well-known models designed to analyze identification/authentication protocols preserving privacy. Some of them are very popular, like [9, 16, 22]. Others have interesting frameworks, like [12, 13, 18] (e.g., [18] is derived from the well-known universal composability framework). Other alternative models are attractive successors of [22], such as [11, 15]. Secondly, in Section 11, we analyze five different authentication protocols with each of these models in order to exhibit the lack of granularity of the state of the art. Finally, in Sections 12 and 13, we thoroughly compare the eight models regarding their different features and their privacy notions. We show that none of these models can fairly analyze and compare protocols. This fact is especially apparent when the system’s assumptions (which can differ from one system to another) are taken into account for an analysis.

2. Common Definitions

In this section, we give all the common definitions that are used in the presented privacy models.

2.1. The RFID System

For all the privacy models, an RFID system is composed of three kinds of entities: tags, readers, and a centralized database. It is generally considered that the database and the readers are connected online all together through a secure channel, and therefore they form one unique entity, the reader.

We denote by T a tag, by R the reader, and by DB the reader’s database. A tag T is able to communicate with R when it enters into R’s electromagnetic field. Then both reader and tag can participate together in an RFID protocol execution π. This protocol can be an identification or an authentication protocol. We define an n-pass RFID protocol as being a protocol where n messages are exchanged between R and T.

The reader R is a powerful transceiver device whose computation capabilities approach those of a small computer. A tag T is a transponder with identifier ID. Its memory can vary from a hundred bits (as for EPC tags [1]) to a few kilobytes (such as contactless smartcards [2, 3]). Its computation capabilities are generally much lower than a reader’s, but, depending on the tag, it can perform simple logic operations, symmetric-key cryptography, or even public-key cryptography. A tag is considered legitimate when it is registered in the database as an authorized entity of the system. The database DB stores, at least, the identifier ID and potentially a secret of each legitimate tag involved in the system.

2.2. Basic Definitions

First, we define k as the security parameter of the system and p as a polynomial function. Then, a function ε is said to be negligible in k if, for every positive polynomial p, there exists an integer k0 such that, for all k > k0, ε(k) < 1/p(k).
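As a rough numerical illustration (not a proof, since negligibility is an asymptotic notion), the following sketch shows that a function such as ε(k) = 2^−k eventually drops below 1/k^c for any fixed exponent c; all names here are ours:

```python
# Numerical illustration of negligibility: eps(k) = 2^-k versus 1/k^c.
def eps(k: int) -> float:
    return 2.0 ** -k

def poly_inverse(k: int, c: int = 3) -> float:
    return 1.0 / (k ** c)

def threshold(c: int, max_k: int = 10_000):
    """Smallest k0 such that eps(k) < 1/k^c for all tested k >= k0."""
    k0 = None
    for k in range(2, max_k):
        if eps(k) < poly_inverse(k, c):
            if k0 is None:
                k0 = k
        else:
            k0 = None  # the inequality failed again: restart the search
    return k0
```

For c = 3 the crossover happens at k0 = 10, and the gap only widens afterwards, which is the intuition behind “for all k beyond some k0”.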

Then, we define all the different entities that may play a role in the presented privacy models. An adversary A is a malicious entity whose aim is to perform some attacks, either through the wireless communications between readers and tags (e.g., eavesdropping) or on the RFID devices themselves (e.g., corruption of a device and retrieval of all the information stored on it). The adversary advantage is the success measure of an attack performed by A. In some models, A is requested to answer a kind of riddle, which is determined by an honest entity, called the challenger C. A challenge tag is a tag which suffers an attack performed by A. It can be chosen either by A or by C.

Generally, a model based on oracles is used to represent the possible interactions between A and the system. Thus, A carries out its attack on the system by performing queries to the oracles that simulate the system. The generic oracles used in the presented privacy models are detailed in Section 2.4.

We consider that A is able to play/interact with a tag when the latter is in A’s neighborhood. At that moment, the tag is designated by its pseudonym vtag (not by its identifier ID). During an attack, if a tag goes out of and comes back to A’s neighborhood, then it is considered that its pseudonym has changed. This notion is detailed in the Vaudenay model [22] (see Section 5). The same happens when a set of tags is given to the challenger C: when C gives the tags back to A, their pseudonyms are changed.

2.3. Procedures

Most of the models studied in this paper focus on an RFID system based on an anonymous identification protocol implying a single reader and several tags. The system is generally composed of several procedures, either defining how to set up the system, the reader, and the tags, or defining the studied protocol. One way to define these procedures is detailed in the following. Note that this is just a generalization and it may differ in some models. (i) SetupReader(1^k) defines R’s parameters (e.g., generating a private/public key pair) depending on the security parameter k. It also creates an empty database DB which will later contain, at least, the identifiers and secrets of all tags. (ii) SetupTag(ID) returns K_ID, that is, the secret of the tag with identifier ID. (ID, K_ID) is stored in the database of the reader. (iii) Ident is a polynomial-time interactive protocol between the reader R and a tag T, where R ends with a private tape Out. At the end of the protocol, the reader either accepts the tag (if legitimate) and Out = ID, or rejects it (if not) and Out = ⊥.
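The three procedures above can be sketched with a toy instantiation. The sketch below assumes a simple HMAC-based challenge/response identification protocol; the function names and the protocol itself are ours, not part of any analyzed model:

```python
# Toy instantiation of the setup/identification procedures (our own sketch).
import hashlib
import hmac
import os

def setup_reader(security_parameter: int):
    """SetupReader: returns the reader state with an empty database DB."""
    return {"k": security_parameter, "DB": {}}

def setup_tag(reader, ID: str):
    """SetupTag: generates the tag secret K_ID and registers (ID, K_ID) in DB."""
    K = os.urandom(reader["k"] // 8)
    reader["DB"][ID] = K
    return {"ID": ID, "K": K}

def ident(reader, tag):
    """Ident: one protocol run; the reader's private tape is ID or 'reject'."""
    challenge = os.urandom(8)                                    # reader -> tag
    response = hmac.new(tag["K"], challenge, hashlib.sha256).digest()  # tag -> reader
    for ID, K in reader["DB"].items():
        expected = hmac.new(K, challenge, hashlib.sha256).digest()
        if hmac.compare_digest(expected, response):
            return ID        # reader accepts: Out = ID
    return "reject"          # reader rejects: Out = bottom
```

A legitimate tag (set up via `setup_tag`) is accepted, while a tag holding an unregistered key is rejected, matching the Out = ID / Out = ⊥ distinction.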

2.4. The Generic Oracles

An adversary A is able to interact/play with the system using the following oracles. First, it can set up a new tag with identifier ID. (i) CreateTag(ID) creates a tag with a unique identifier ID. It uses SetupTag to set up the tag. It updates DB, adding this new tag.

A can ask for a full execution of the protocol on a tag T. (i) Execute(T) executes an Ident protocol between R and T. It outputs the transcript of the protocol execution π, that is, the whole list of the successive messages of the execution.

Also, it can decompose a protocol execution, combining the following oracles. (i) Launch() makes R start a new protocol execution π. (ii) SendReader(m, π) sends a message m to R in the protocol execution π. It outputs the response of the reader. (iii) SendTag(m, T) sends a message m to T. It outputs the response of the tag.

Then, A can obtain the reader’s result of a protocol execution π. (i) Result(π): when π is completed, it outputs 1 if the reader accepted the tag (i.e., Out ≠ ⊥), and 0 otherwise.

And finally, it can corrupt a tag in order to recover its secret. (i) Corrupt(T) returns the current secret K_ID of T.

If the conditions of the oracles’ uses are not respected, then the oracles return ⊥. Note that these definitions are generic. Some models do not use exactly the same generic oracles: in those cases, refinements of their definitions will be provided.
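To make the oracle interface concrete, the following sketch mediates an adversary’s queries over a toy tag population. The class, the 2-pass challenge/response it runs, and all names are ours, used purely to illustrate how the generic oracles of this section fit together:

```python
# A minimal oracle interface over a toy RFID system (illustrative only).
import hashlib
import os

class RFIDSystem:
    def __init__(self):
        self.db = {}      # reader database: ID -> secret K_ID
        self.tags = {}    # ID -> tag's current secret
        self.runs = {}    # execution id pi -> transcript

    def create_tag(self, ID):              # CreateTag(ID)
        K = os.urandom(16)
        self.db[ID] = K
        self.tags[ID] = K

    def launch(self):                      # Launch()
        pi = len(self.runs)
        self.runs[pi] = []
        return pi

    def send_tag(self, m, ID):             # SendTag(m, T)
        return hashlib.sha256(self.tags[ID] + m).digest()

    def execute(self, ID):                 # Execute(T): one full 2-pass run
        pi = self.launch()
        a = os.urandom(8)                  # reader's challenge
        b = self.send_tag(a, ID)           # tag's answer
        self.runs[pi] = [a, b]
        return pi, [a, b]                  # pi and the transcript

    def result(self, pi):                  # Result(pi): 1 iff reader accepts
        a, b = self.runs[pi]
        return int(any(hashlib.sha256(K + a).digest() == b
                       for K in self.db.values()))

    def corrupt(self, ID):                 # Corrupt(T)
        return self.tags[ID]
```

An adversary script would then be a sequence of such method calls, exactly as the models describe sequences of oracle queries.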

3. Avoine [9], 2005

In 2005, Avoine proposed the first privacy model for RFID systems. The goal was to analyze the untraceability notion of 3-pass protocols following the idea of communication intervals: the adversary asks some oracle queries on specific intervals of the targeted tags’ lives. The privacy notion behind this model represents the infeasibility of distinguishing one tag among two.

3.1. The Oracles

This model considers that each tag has a unique and independent secret and that, at the initialization of the system, DB already stores all the tags’ secrets, that is, a SetupTag has already been performed on every tag.

Then A only has access to the following modified generic oracles adapted for 3-pass protocols. Instead of using the entities’ names, Avoine uses the names of the protocol executions. Since T and R can run several protocol executions, the i-th execution of T (resp., the j-th execution of R) is referred to explicitly. These notations favor the precise description of T’s and R’s lifetimes. (i) SendTag: A sends a request to T and then sends the third message of the protocol after receiving T’s answer. This is done during the i-th execution of T. (ii) SendReader: A sends a message to R in R’s j-th protocol execution. It outputs R’s answer. (iii) Execute: A executes a whole execution of the protocol between T and R. This is done during the i-th execution of T and the j-th execution of R. A obtains the whole transcript. (iv) Execute*: this is the same as the normal Execute, but it only returns the reader-to-tag messages, that is, the messages sent by R. (v) Corrupt: returns the current secret of T when the tag is in its i-th execution.

The goal of the Execute* oracle is to simulate the fact that the forward channel (from reader to tag) has a longer communication range than the backward channel (from tag to reader) and can therefore be easily eavesdropped. It formalizes this asymmetry between the channels.

Two remarks are of interest for the Corrupt oracle. First, Corrupt can be used only once by A; after this oracle query, A cannot use the other oracles anymore. Second, Corrupt is called on a tag execution number, and not on the tag itself. This allows A to specify exactly the targeted moment of the tag’s life.

During its attack, A has access to the oracle set O = {SendTag, SendReader, Execute, Execute*, Corrupt}.

Avoine denotes the result of an oracle query on T, for a query among SendTag, Execute, Execute*, and Corrupt. Avoine defines an interaction as being a set of executions on the same tag during an interval when A can play with T, completed with the corresponding SendReader answers. By this definition, the length of an interaction is the number of executions of T that it covers.

Avoine also defines a function Oracle, which takes as parameters a tag T, an interval I, and the oracle set O, and which outputs the interaction that maximizes A’s advantage.

3.2. Untraceability Experiments

Avoine defines two experiments to represent two untraceability notions. They depend on two parameters which represent, respectively, a reference length and a challenge length, and which are functions of the security parameter k.

The first experiment, given in Box 1, works as follows. First, A receives the interactions of a tag T during an interval I that it chooses. Then, it receives the interactions of the challenge tags T0 and T1, also during intervals I0 and I1 that it chooses, such that T = T0 or T = T1. This last information is unknown to A. Additionally, here, neither of the two intervals I0 and I1 crosses the interval I of T. At the end, A has to decide which one of the challenge tags is the tag T.

The second experiment, given in Box 2, has the same mechanism. The only difference is that, now, C is the one that chooses the intervals I0 and I1 of the challenge tags, not A anymore.
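The guessing-game structure of these experiments can be sketched as follows. We deliberately instantiate it with a trivially traceable toy protocol (tags broadcast a fixed value), so the distinguishing adversary wins with overwhelming probability while a blind guesser stays near 1/2; all names and the protocol are invented for illustration:

```python
# Toy version of the interval-based untraceability game (illustrative only).
import os
import random

def make_tag():
    """A tag is reduced to its fixed 4-byte secret."""
    return os.urandom(4)

def interact(tag_secret, n=3):
    """An interaction over an interval: here, n identical broadcast answers."""
    return [tag_secret.hex()] * n

def experiment(adversary, trials=200):
    """Repeats the game: A sees a reference interaction of T, then the
    challenge interactions of T0 and T1, and must guess which one is T."""
    wins = 0
    for _ in range(trials):
        t0, t1 = make_tag(), make_tag()
        b = random.randrange(2)              # hidden bit: T = Tb
        ref = interact((t0, t1)[b])          # reference interval of T
        i0, i1 = interact(t0), interact(t1)  # challenge intervals
        wins += adversary(ref, i0, i1) == b
    return wins / trials

def tracing_adversary(ref, i0, i1):
    """Wins because the toy protocol leaks a constant identifier."""
    return 0 if ref[0] == i0[0] else 1
```

Untraceability would require that no adversary does significantly better than the 1/2 baseline, which this toy protocol clearly violates.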

3.3. Untraceability Notions

From the experiments defined above, the notions of existential and universal untraceability are derived in this model, depending on restrictions about the choices of I0 and I1: the existential notion is obtained when A chooses I0 and I1, whereas the universal notion is obtained when C chooses them. Then, the notions are further refined according to whether I0 and I1 take place after or before I, with respect to the lifetime of the system. (i) If A (resp., C) chooses I0 and I1 such that they take place after I, the “future” existential (resp., universal) variant is obtained. (ii) If A (resp., C) chooses I0 and I1 such that they take place before I, the “past” existential (resp., universal) variant is obtained.

The untraceability notion obtained when the Corrupt oracle is used is called forward untraceability.

Definition 1 (untraceability [9]). An RFID system is said to be untraceable for one of the above notions if, for every adversary A, A’s advantage in the corresponding experiment, that is, its probability of correctly identifying the challenge tag beyond a random guess, is negligible in the security parameter.

Direct implications can be drawn between these notions.

4. Juels and Weis [16], 2007

Two years after Avoine’s publication, Juels and Weis proposed a new privacy model, referred to in the sequel as JW, based on the indistinguishability of tags. It is intended to analyze classical challenge/response protocols based on symmetric-key cryptography (with possible additional messages in order to update the tags’ keys).

In their article, the authors highlighted that the Avoine model lacks two important features. Firstly, they proved that it is unable to capture an important attack on systems where tags have correlated secrets, because Avoine’s adversary can only play with two tags. Secondly, they showed that Avoine did not anticipate all the possible attacks that can be performed on a protocol: the Avoine model does not capture all the relevant information that can be extracted from a protocol execution. For instance, it does not consider that A has access to any execution result. However, this simple “side information bit” allows formalizing a special kind of attack on desynchronizable protocols like OSK, as explained in Appendix B.3 and in [24]. Therefore, the JW model aimed to fill that gap.
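The role of the Result bit can be illustrated with a rough sketch in the spirit of the OSK observation above. The protocol details below are invented: the tag hash-chains its key at every interrogation, and the reader searches only a bounded window of chained keys, so an adversary who silently queries a tag past that window can later recognize it by a rejected authentication:

```python
# Toy desynchronization attack exploiting the Result bit (illustrative only).
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

class Tag:
    """OSK-like tag: updates its key by hashing after every interrogation."""
    def __init__(self, k):
        self.k = k
    def respond(self):
        out = h(self.k + b"resp")
        self.k = h(self.k)          # key update after each query
        return out

class Reader:
    """Accepts a response matching a key within its search window.
    A real scheme would also resynchronize on success; omitted here."""
    def __init__(self, k, window=2):
        self.k, self.window = k, window
    def result(self, resp):         # the Result bit given to the adversary
        k = self.k
        for _ in range(self.window + 1):
            if h(k + b"resp") == resp:
                return 1
            k = h(k)
        return 0
```

Querying the tag three times offline pushes it beyond the reader’s window of two extra hashes; the subsequent genuine run returns Result = 0, singling out the desynchronized tag, which is exactly the side information the Avoine model cannot express.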

4.1. Oracles

At the initialization of the system, DB already stores all the tags’ content, that is, a SetupTag has already been performed on every tag. Then A has access to the generic oracles Launch, SendTag, and SendReader, with the difference that the output of SendReader includes the output of Result. It furthermore has access to the following oracles. (i) TagInit(T): when T receives this query, it begins a new protocol execution and deletes the information related to any existing execution. (ii) SetKey(T, K'): when T receives this query, it outputs its current key K and replaces it by a new one, K'.

The SetKey oracle is equivalent to the Corrupt oracle given in Section 2.4 in the sense that it reveals the tag’s current key to A. Note that its use and its result have an interesting feature: A is able to put any new key in the targeted tag, either the revealed one or a random one (that can be illegitimate).

4.2. Privacy Experiment

Let r, s, and t be, respectively, the numbers of Launch queries, computation steps (represented by the SendReader and SendTag queries), and TagInit queries that are allowed to A. Let n be the total number of tags involved in the system. The privacy experiment is given in Box 3.

4.3. Privacy Notions

From the previous experiment, the JW model defines the following privacy property, where r, s, and t can be functions of the system security parameter k.

Definition 2 ((r, s, t)-privacy [16]). A protocol initiated by R in an RFID system with security parameter k is (r, s, t)-private if, for every adversary A limited to r Launch queries, s computation steps, and t TagInit queries, A’s advantage in the previous experiment is negligible in k.

Considering a variant of the experiment where the “except the challenge tag” restriction is removed from step (6.b), forward (r, s, t)-privacy can be defined in the same way as the previous definition.

Note that, if A uses SetKey to put an illegitimate key in a tag, then this tag will possibly no longer be successfully authenticated by the reader. Nevertheless, whether this is performed on the nonchallenge tags or on the challenge tag (only in the forward-privacy experiment), this does not help A to find the bit b more easily and thus does not influence its success in the experiment.

5. Vaudenay [22], 2007

Later the same year, Vaudenay proposed formal definitions for RFID systems and adversaries and considered that a system can be characterized by two notions: security and privacy. In this paper, we only present the privacy notion. Vaudenay’s article followed joint work with Bocchetti [25], and its goal was to propose a comprehensive model that can formalize a wide range of adversaries. This characteristic is missing in the previous models and turns out to be an asset of the Vaudenay model.

This model defines tags with respect to the adversary’s possibility to interact with them, as explained in Section 2.2. Clearly, when a tag is within A’s neighborhood, it is said to be drawn and has a pseudonym vtag so that A is able to communicate with the tag. In the opposite situation, a tag is said to be free (i.e., not drawn), and A cannot communicate with it. Consequently, the model considers that, at any given time, a tag can be either free or drawn. For example, the same tag with identifier ID which is drawn, freed, and drawn again has two pseudonyms: A sees two different tags. Additionally, all the tags may not be accessible to A during all the attack: for instance, A may only play with two (drawn) tags during its attack.

5.1. Oracles

Contrary to the previous models, DB is empty at the initialization of the system. Then A has access to all the generic oracles defined in Section 2.4. The only modification of these is that A can create a fake tag with CreateTag: in that case, no information related to this tag is stored in DB. It can also query the following ones. (i) DrawTag: following a probability distribution (which is specified by a polynomially bounded sampling algorithm), it randomly selects n tags among all the existing free (not already drawn) ones. For each chosen tag, the oracle assigns to it a new pseudonym, denoted vtag, and changes its status from free to drawn. Finally, the oracle outputs all the generated pseudonyms in a random order. If there are not enough tags (i.e., fewer than n free tags), then the oracle outputs ⊥. It is further assumed that this oracle returns bits telling whether each of the drawn tags is legitimate or not. All (vtag, ID) relations are kept in an a priori secret table denoted Γ. (ii) Free(vtag) moves the tag from the status drawn to the status free. vtag is unavailable from now on.
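The DrawTag/Free bookkeeping, with its hidden pseudonym table Γ, can be sketched as follows; the class and names are ours, and the legitimacy bits are omitted for brevity:

```python
# Sketch of Vaudenay's DrawTag/Free mechanics with the secret table Gamma.
import random

class DrawFree:
    def __init__(self, ids):
        self.free_tags = set(ids)   # tags currently "free"
        self.gamma = {}             # secret table Gamma: pseudonym -> ID
        self.counter = 0            # ensures pseudonyms are never reused

    def draw_tag(self, n):
        """Draws n random free tags; returns their fresh pseudonyms."""
        if n > len(self.free_tags):
            return None                              # not enough free tags
        chosen = random.sample(sorted(self.free_tags), n)
        vtags = []
        for ID in chosen:
            self.free_tags.remove(ID)
            vtag = f"vtag{self.counter}"
            self.counter += 1
            self.gamma[vtag] = ID                    # hidden from the adversary
            vtags.append(vtag)
        random.shuffle(vtags)                        # random output order
        return vtags

    def free(self, vtag):
        """Frees the tag; its pseudonym becomes unavailable."""
        ID = self.gamma.pop(vtag)
        self.free_tags.add(ID)
```

Because the counter never repeats, a tag drawn, freed, and drawn again receives a different pseudonym, which is exactly why A “sees two different tags”.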

5.2. Privacy Experiment

From the oracles given above, Vaudenay defines five classes of polynomial-time adversaries, characterized by A’s ability to use the oracles.

Definition 3 (adversary class [22]). An adversary is said to be (i) strong if it has access to all the oracles; (ii) destructive if it cannot use a “corrupted” tag anymore (i.e., the tag has been destroyed); (iii) forward if it can only use the Corrupt oracle after its first query to the Corrupt oracle; (iv) weak if it has no access to the Corrupt oracle; (v) narrow if it has no access to the Result oracle.

Remark 4. The following relation is clear: weak ⊆ forward ⊆ destructive ⊆ strong.

Note that the narrow notion is the converse of the wide one. If an adversary is not said to be narrow, then nothing is specified, but the term wide is implicitly meant.

Vaudenay’s privacy experiment is given in Box 4, where P denotes the considered adversary class.

5.3. Privacy Notions

To define the privacy property of Vaudenay, it is first needed to define the notions of blinder (i.e., an algorithm able to simulate the answers of some specific oracles) and trivial adversary (i.e., an adversary that learns nothing about the system).

Definition 5 (blinder, trivial adversary [22]). A blinder B for an adversary A is a polynomial-time algorithm which sees the same messages as A and simulates the Launch, SendReader, SendTag, and Result oracles to A. B does not have access to the reader tapes, so it knows neither the secret key nor the database.
A blinded adversary A^B is itself an adversary that does not use the Launch, SendReader, SendTag, and Result oracles.
An adversary A is trivial if there exists a blinder B such that the difference between the winning probabilities of A and A^B is negligible.

Definition 6 (privacy [22]). The RFID system is said to be P-private if all the adversaries which belong to class P are trivial following Definition 5.

The implications between Vaudenay’s privacy notions follow the adversary class hierarchy given in Remark 4, both for the wide and the narrow classes.

The main result of Vaudenay is the impossibility of strong privacy. However, Vaudenay does not define which privacy level should be targeted by a protocol: it is never specified whether narrow-strong privacy is better or not than forward privacy.

Also, it is not explicit how the blinded adversary A^B operates. Basically, there are two options: (i) A^B aims at the same success probability as A, or (ii) A^B mimics the behavior of A. It is obvious that the first option allows proving the privacy of some protocols which are actually not private, but this should be correctly formalized.

5.4. Extensions of the Model
5.4.1. Model [21], 2008

Paise and Vaudenay extended the Vaudenay model to analyze mutual authentication protocols. Actually, they enriched the definition of the RFID system by introducing an output on the tag side: either the tag accepts the reader (if legitimate) and outputs OK, or rejects it (if not) and outputs ⊥. This formalizes the concept of reader authentication. Nevertheless, their extension does not modify the core of the Vaudenay model.

They also showed an important impossibility result: if the corruption of a tag reveals its entire state (and not only its secret K_ID), then no RFID scheme providing reader authentication is narrow-forward-private. To counter this issue, they claimed that the temporary memory of a tag should be automatically erased as soon as the tag is put back free. However, this idea is not formalized in the paper.

This division between the persistent and the temporary memory of a tag has also been investigated by Armknecht et al. [26]. Based on the work of Paise and Vaudenay, they showed several impossibility results in attack scenarios with special uses of tag corruption.

5.4.2. Model [20], 2011

Ouafi presented in his thesis an adaptation of the Vaudenay model in order to counter Vaudenay’s impossibility result of strong privacy. Concretely, the author proposed to incorporate the blinder with the adversary, so that the blinder has the knowledge of all the random choices made and messages received by the adversary. With this new definition of the blinder, Ouafi proved that strong privacy can be ensured. This result is demonstrated with a public-key-based authentication protocol where the encryption scheme is IND-CCA2 secure and PA1+ plaintext-aware. (More details about these security notions can be found in [27].)

5.4.3. Other Extensions

The Vaudenay model has also been broadened in different works. In a nutshell, this is generally performed via the addition of a new oracle to the adversary capabilities (e.g., Timer in [28], MakeInactive in [29], or DestroyReader in [30]) and of the corresponding new adversary class (e.g., a new class in which A is allowed to use Timer).

6. Van Le et al. [10, 18], 2007

Also in 2007, van Le et al. introduced in [18] a privacy model (extended in [10]) that is derived from the universal composability (UC) framework [31, 32] (and not from the oracle-based framework). Their aim was to provide security proofs of protocols under concurrent and modular composition, such that protocols can be easily incorporated into more complex systems without reanalysis. Basically, the model, denoted LBM in the following, is based on the indistinguishability between two worlds: the real world and the ideal one.

The transposition of RFID privacy into such a framework is a great contribution since universal composability is considered as one of the most powerful tools for security, especially when composition among several functionalities is required.

6.1. UC Security

General statements about the UC framework are briefly detailed in Appendix A for the reader nonfamiliar with the field. Here, we present the security notion provided in such a framework.

To prove that a protocol π is as secure as the corresponding ideal functionality F, no environment should be able to distinguish whether it is interacting with the real adversary and π (i.e., the real world) or with the simulated adversary and F (i.e., the ideal world). Consequently, F must be well defined such that all the targeted security properties are trivially ensured. Canetti formally defines this concept in [31] as follows, where PPT denotes a probabilistic polynomial-time Turing machine.

Definition 7 (UC-emulation [31]). A protocol π UC-emulates a protocol φ if, for all PPT adversaries A, there exists a PPT simulated adversary S such that, for all PPT environments Z, the distributions Exec(π, A, Z) and Exec(φ, S, Z) are indistinguishable.

Based on this security framework, van Le et al. designed in [10, 18] several ideal functionalities to formalize anonymous authentication as well as anonymous authenticated key exchange.

6.2. Description of the LBM Model

The advantage of using this UC-based model is that all the possible adversaries and environments are considered during a security proof carried out with LBM. In this paper, we only focus on the security objective of anonymous authentication.

6.2.1. Assumptions of an RFID System

First, the LBM model establishes that the reader is the only entity that can start a protocol execution. Then, it considers that only tags can be corrupted by an adversary . Upon corruption of a tag, obtains its keys and all its persistent memory values.

6.2.2. The LBM Ideal Functionality

This ideal functionality represents the anonymous authentication security objective of a given protocol. To do so, several parties (at least R and one tag) may be involved in a protocol execution. Two parties are said to be feasible partners if and only if they are, respectively, R and a tag. In the ideal world, communication channels between tags and R are assumed to be anonymous (meaning that they only reveal the type of a party, either tag or reader), and a sent message is necessarily delivered to the recipient. Finally, for each party, the ideal functionality maintains the list of all its execution records (its state) and the list of all its preceding incomplete executions (its active list) (Box 5).

6.2.3. Forward-Security

When the adversary corrupts a tag, it gets its identifier and is then able to impersonate this tag using the Impersonate command. A corrupted tag is thereafter considered as totally controlled by the adversary. Consequently, the ideal functionality will no longer manage the behavior of this corrupted tag and thus will reject every Initiate command from this tag. As the tag’s state records are removed after a corruption, the adversary is not able to link the related tag to its previous authentications.

However, the adversary is able to link all the incomplete protocol executions of a corrupted tag up to the last successfully completed one, based on the knowledge of the tag’s active list. Thus, the ideal functionality obviously provides forward-security for all previously completed protocol executions.

7. Van Deursen et al. [13], 2008

The model of van Deursen et al., published in 2008, defines untraceability in the standard Dolev-Yao intruder model [33]. The untraceability notion is inspired by the anonymity theory given in [34, 35] and is used for the formal verification of RFID protocols. Such a technique is based on a symbolic protocol analysis approach (and not on the oracle-based framework). This model is called DMR in what follows.

7.1. Definition of the System

We remind below the basic definitions given in DMR.

First, the system is composed of a number of agents (e.g., Alice or Bob) that execute a security protocol, the latter being described by a set of traces. A security protocol represents the behavior of a set of roles (i.e., initiator, responder, and server), each one specifying a set of actions. These actions depict the role specifications with a sequence of events (e.g., sending or reception of a message). A role term is a message contained in an event, and it is built from basic role terms (e.g., nonces, role names, or keys). A complex term is built with functions (e.g., tupling, encryption, hashing, and XOR).

Each trace is composed of interleaved runs and run prefixes, denoted subtraces. A run of a role is a protocol execution from that role’s point of view, labeled with a (possibly unique) run identifier. Thus, a run is an instantiation of a role. A run event is an instantiation of a role event, that is, an instantiation of an event’s role terms. A run term denotes an instantiated role term. A run prefix is an unfinished run.

The adversary A follows the Dolev-Yao model and is characterized by its knowledge. This knowledge is composed of a set of run terms known at the beginning and the set of run terms that it observes during its attack. The adversary is allowed to manipulate the information in its knowledge to understand terms or build new ones. However, perfect cryptography is assumed (i.e., cryptographic primitives are assumed unbreakable and considered as black boxes). The inference of a term t from a term set M is denoted by M ⊢ t.
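The inference relation can be sketched as a deduction closure. The sketch below supports only pairing and symmetric encryption under the perfect cryptography assumption (a ciphertext yields its plaintext only when the key is derivable); the term encoding is ours:

```python
# Minimal Dolev-Yao deduction closure (pairing + symmetric encryption only).
def infer(knowledge):
    """Closure of a term set. Terms: strings, ('pair', a, b), ('enc', m, k)."""
    K = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(K):
            if isinstance(t, tuple):
                if t[0] == "pair":                  # projection of a pair
                    new = {t[1], t[2]}
                elif t[0] == "enc" and t[2] in K:   # decryption with a known key
                    new = {t[1]}
                else:
                    new = set()                     # opaque: nothing learned
                if not new <= K:
                    K |= new
                    changed = True
    return K
```

With this reading, M ⊢ t simply means `t in infer(M)`; note that a ciphertext whose key is never derivable contributes nothing, which is the black-box assumption at work.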

Corrupted agents are modeled. (Note that, regarding corruption, there is no restriction on the role of such an agent: it can be either a tag or a reader.) The adversary is given all the secrets of a corrupted agent in its initial knowledge. When an agent is corrupted, it is said to be "destroyed," that is, it cannot be used during the attack. Yet, the security evaluation of a system is done on noncorrupted agents, that is, the adversary cannot gain access to the secrets of an agent after the beginning of its attack.

7.2. Untraceability Notion

First, the model defines several notions of linkability, reinterpretation, and indistinguishability, before giving the untraceability one.

Definition 8 (linkability of subtraces [13]). Two subtraces are said to be linked if they are instantiated by the same agent.

The notion of reinterpretation has been introduced in [34] in order to show that subterms of a message can be replaced by other subterms when the adversary is not able to understand these subterms. Note that, when the adversary is able to understand a subterm, it remains unchanged.

Definition 9 (reinterpretation [13]). A map from run terms to run terms is called a reinterpretation under knowledge set if it and its inverse satisfy the following conditions: (i) if is a basic run term, (ii) if is -tuple, (iii) if or , and is an encryption under key , (iv) if or is not a hash function.

Reinterpretations are used to define indistinguishability of traces.

Definition 10 (indistinguishability of traces [13]). Let be the adversary’s knowledge at the end of trace . The trace is indistinguishable from a trace , denoted , if there is a reinterpretation under , such that for all roles and subtraces .

From all the above notions, the untraceability notion of a role is defined as follows.

Definition 11 (untraceability [13]). A protocol is said to be untraceable with respect to a role when the corresponding condition on traces holds. In this paper, if no role is specified, we consider that "untraceability" means untraceability of the tag role.

8. Canard et al. [11, 36], 2010

In the same vein as the Vaudenay model, Canard et al. proposed in 2010 a security model that comprises the properties of (strong) correctness, soundness, and untraceability. We only present the last notion. Contrary to Vaudenay, the authors only defined untraceability (and not privacy in general), and their main goal was to use the strongest adversary of the Vaudenay model. In the following, this model will be denoted CCEG.

8.1. Oracles

As for Vaudenay, the system is empty after setup, and a tag can be either drawn or free. The adversary then has access to all the generic oracles. It may also use the following ones.

(i) DrawTag works similarly to the one of Vaudenay. It first selects tags randomly and uniformly among all existing (not already drawn) ones. For each chosen tag, the oracle gives it a new pseudonym and changes its status from free to drawn. Since the adversary cannot create fake tags here, the oracle only outputs all the generated pseudonyms, in any order. If there are not enough free tags, the oracle outputs an error. All relations between pseudonyms and tags are kept in an a priori secret table.

(ii) Free works exactly as the one of Vaudenay.
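The behavior of these two oracles can be sketched as follows; the class and method names, as well as the pseudonym format, are our own assumptions:

```python
import secrets

class RFIDSystem:
    def __init__(self, tag_ids):
        self.free_tags = set(tag_ids)   # tags not currently drawn
        self.table = {}                 # secret table: pseudonym -> real id

    def draw_tag(self, n):
        """Draw n tags uniformly at random among the free ones.

        Returns fresh pseudonyms only (never real identifiers); returns
        None (standing for the invalid symbol) if fewer than n tags are
        free, since the adversary cannot inject fake tags here.
        """
        if len(self.free_tags) < n:
            return None
        pseudonyms = []
        for _ in range(n):
            t = secrets.choice(sorted(self.free_tags))
            self.free_tags.remove(t)
            vtag = "vtag-" + secrets.token_hex(4)
            self.table[vtag] = t        # kept secret from the adversary
            pseudonyms.append(vtag)
        return pseudonyms

    def free(self, vtag):
        """Release a drawn tag: it may later be drawn again under a fresh pseudonym."""
        t = self.table.pop(vtag, None)
        if t is not None:
            self.free_tags.add(t)
```

Because the table mapping pseudonyms to identifiers stays hidden, the adversary only ever manipulates pseudonyms, which is exactly what makes a nonobvious link meaningful.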

8.2. Untraceability Experiment

From the oracles given above, CCEG defines three classes of polynomial-time adversaries for the untraceability experiment.

Definition 12 (adversary class [11]). An adversary class is said to be (i) strong if the adversary has access to all the oracles; (ii) destructive if it cannot use a "corrupted" tag anymore (i.e., the tag has been destroyed); (iii) weak if it has no access to the Corrupt oracle.

The authors do not define the narrow adversary class introduced in the Vaudenay model (see Section 5 for more details). They consider that the model aims to be as powerful as possible: the narrow notion weakens the adversary.

A link is a couple of pseudonyms associated to the same identifier in the secret table. Some links are considered obvious (e.g., both tags of the link have been corrupted). Therefore, the authors define the notion of nonobvious link. As a remark, links are chronologically ordered, that is, the first pseudonym of a link has been freed before the second one has been drawn.

Definition 13 (nonobvious link (NOL) [11]). A couple of pseudonyms is a nonobvious link if both refer to the same identifier in the secret table and if a "dummy" adversary, which only has access to CreateTag, DrawTag, Free, and Corrupt, is not able to output this link with a better probability. Moreover, a nonobvious link is said to be (i) standard if the adversary has corrupted neither of the two pseudonyms' tags; (ii) past if it has corrupted the later pseudonym; (iii) future if it has corrupted the earlier one.

Note that this model uses a "dummy" adversary instead of a blinded adversary as in the Vaudenay model. Both adversaries are equivalent but not identical. Indeed, the main difference is that Vaudenay's blinder is an entity clearly separated from the adversary. Therefore, the blinder does not know the random choices made by the adversary during the experiment. On the opposite, in CCEG, the dummy adversary is a single entity, and consequently it is aware of its own random choices.

A weak adversary is only able to output a standard NOL, as it cannot query the Corrupt oracle. A destructive adversary is not able to output a future NOL, as a tag corruption destroys the tag (and thus prevents the tag from being drawn again). However, this adversary can output a standard or past NOL. Then, a strong adversary is able to output every NOL.

CCEG’s untraceability experiment is given in Box 6, where the adversary belongs to one of the three classes: strong, destructive, or weak.

8.3. Untraceability Notions

With the previous experiment, the CCEG untraceability of a system is proved if no adversary is able to output a NOL with a probability better than that of the dummy adversary.

Definition 14 (untraceability [11]). An RFID system is said to be strong-untraceable (resp., destructive-untraceable/weak-untraceable) if, for every strong (resp., destructive/weak) adversary running in polynomial time, it is possible to define a "dummy" adversary that only has access to the oracles CreateTag, DrawTag, Free, and Corrupt and that outputs a nonobvious link with a probability negligibly close to the adversary's.

Direct implications follow from these notions: strong-untraceability implies destructive-untraceability, which in turn implies weak-untraceability.

The main result of CCEG is that strong-untraceability (the strongest privacy property) is achievable.

9. Deng et al. [12], 2010

Also in 2010, Deng et al. proposed a new framework based on a zero-knowledge formulation to define the security and privacy of RFID systems. Here, we only present the zero-knowledge privacy notion (denoted zk-privacy), which is a new way of thinking about privacy for RFID. This model, denoted DLYZ in the sequel, is part of the unpredictability family of models [12, 14, 17, 19]. They all rely on the unpredictability of the output returned by a tag or a reader in a protocol execution. In this paper, we decided to only present DLYZ since it is the most accomplished model of this family.

9.1. Considered Protocol

This model considers that an RFID protocol execution is, w.l.o.g., always initialized by the reader and consists of a number of rounds. Each protocol execution is associated with a unique identifier. At each execution, a tag may update its internal state and secret key, and the reader may update its internal state and database. The update process (of the secret key or the internal state) on a tag always erases the old values. The output bits of the reader and the tag (equal to 1 if the entity accepts the protocol execution, or 0 otherwise) are publicly known. Note that the authors claim that a tag has no output bit if the authentication protocol is not mutual. However, we consider this too limiting, since a tag can have an output (possibly known by the adversary) even if it may not authenticate the reader. For instance, its output can be "I arrived correctly at the end of the protocol on my side."

DLYZ assumes that a tag may participate in at most a bounded number of executions in its life; thus, the reader is involved in a bounded number of executions overall, where the bound is polynomial in the security parameter and in the total number of tags involved in the system.

9.2. Oracles

In a nutshell, DLYZ aims to analyze protocols where entities' secrets may potentially be updated at every protocol execution. Therefore, the model explicitly enumerates the internal information of each entity. At the initialization of the system, the database is in an initial state and already stores the secrets of all the tags, that is, a tag creation has already been performed on every tag. The only differences in the initialization are the following: (i) the setup additionally generates the reader's initial internal state; (ii) it associates to every tag a triplet containing, respectively, the tag's public parameter, initial secret key, and initial internal state.

This information is stored in the database. Finally, the public parameters of the system are made available. At the end of the system's initialization, all the tags are accessible to the adversary.

Then, the adversary has access to the following modified generic oracles.

(i) Launch makes the reader launch a new protocol execution and generates the first-round message, which is also used as the execution identifier. The reader stores this identifier into its internal state.

(ii) SendTag sends a message to a tag. The tag's response is as follows. (1) If the tag currently does not run any execution, then it (a) initiates a new execution whose identifier is the received message, (b) treats this message as the first-round message of the new execution, and (c) returns the next-round message. (2) If the tag is currently running an incomplete execution and is waiting for a message from the reader, then (a) if the received message is an intermediate-round one, the tag treats it as the message from the reader and returns the next-round message; (b) if it is the last-round message of the execution, the tag returns its output and updates its internal state.

(iii) SendReader sends a message to the reader for the execution with the given identifier. After receiving it, the reader checks from its internal state whether it is running such an execution, and its response is as follows. (1) If the reader is currently running an incomplete execution with this identifier and is waiting for a message from a tag, then (a) if the received message is an intermediate-round one, the reader treats it as the message from the tag and returns the next-round message; (b) if it is the last-round message, the reader returns its output and updates its internal state and the database. (2) In all the other cases, the reader returns an error (for invalid queries).

(iv) Corrupt returns the secret key and the internal state currently held by the queried tag. Once a tag is corrupted, all its actions are controlled and performed by the adversary.

For a completed protocol execution, the transcript is the sequence of exchanged messages, excluding the entities' outputs.

A PPT adversary takes on input the system public parameters, the reader, and the set of tags of the already initialized system, and interacts with them via the four oracles above. A second-stage PPT adversary, equivalent to the first one and generally given its historical state information, interacts with the reader and the tags in the same way. An adversary is said to have a blinded access to a challenge tag if it interacts with it via a special interface, that is, a PPT algorithm which runs the adversary internally and interacts with the challenge tag externally. To send a message to the challenge tag, the adversary issues a SendTag query to the interface; the interface then invokes the challenge tag with SendTag and forwards the tag's output back to the adversary. The adversary does not know which tag is interacting with it, and the interface interacts with the challenge tag via SendTag queries only.

Definition 15 (clean tag [12]). A tag is said to be clean if it is not corrupted (i.e., no Corrupt query has been made on it) and it is not currently running an incomplete execution with the reader (i.e., its last execution is either finished or aborted).

The main goal of this definition is to force the adversary to use some uncorrupted and nonrunning tags to proceed with the zk-privacy experiment (see next section). This notion of nonrunning tags is very similar to the TagInit oracle of JW.

9.3. Privacy Experiments

In the experiments, a PPT CMIM (concurrent man-in-the-middle) adversary (resp., PPT simulator) is composed of a pair of algorithms and runs in two stages. Note that, if the set of clean tags returned by the first stage is empty, then no challenge tag is selected and the experiment reduces to its first stage.

The first experiment, given in Box 7, is the one performed by the real adversary. After the system initialization, the first stage plays with all the entities and returns a set of clean tags. A challenge tag is then chosen at random from this set. The second stage then plays with all the entities, including the challenge tag via the blinded interface, but excluding the other tags of the clean set. At the end, it outputs a view of the system.

Then, the second experiment, given in Box 8, is the one performed by the simulator. As in the previous experiment, its first stage plays with all the entities and returns a set of clean tags. A challenge tag is then chosen at random from this set, but the simulator is not informed of its identity and cannot play with this tag anymore. Its second stage then plays with all the entities, except the set of clean tags. At the end, it outputs a simulated view of the system.

9.4. Privacy Notions

From the previous experiments, the zk-privacy of a system is proved when no distinguisher is able to tell whether it is interacting with the real world or with the simulated one.

Definition 16 (zk-privacy [12]). An RFID system satisfies computational (resp., statistical) zk-privacy if, for any PPT CMIM adversary, there exists a polynomial-time simulator such that, for all sufficiently large security parameters and any number of tags polynomial in them, the following ensembles are computationally (resp., statistically) indistinguishable: (i) the view output in the real experiment; (ii) the simulated view output in the simulated experiment. That is, no polynomial-time (resp., computationally unbounded) distinguisher can tell the two ensembles apart with nonnegligible advantage.

The probability is taken over the random coins used during the system initialization, the random coins used by the adversary, the simulator, the reader, and all (uncorrupted) tags, the choice of the challenge tag, and the coins used by the distinguisher algorithm.

Definition 17 (forward/backward zk-privacy [12]). Consider the final (resp., initial) secret key and internal state of the challenge tag, that is, the values held at the end (resp., beginning) of the experiment. An RFID system is forward (resp., backward) zk-private if, for any PPT CMIM adversary, there exists a polynomial-time simulator such that, for all sufficiently large security parameters, the following distributions are indistinguishable: (i) the real view augmented with these values; (ii) the simulated view augmented with them.
It is required that the challenge tag remain clean at the end of the experiment. Note that the adversary is allowed to corrupt it after the end of the experiment.

One justification given by the authors for this way of handling corruption is that it is enough to give the challenge tag's secrets to the distinguisher at the end. Another reason pointed out by the authors is that forward or backward zk-privacy cannot be achieved if the adversary corrupts the challenge tag before the end of the experiment.

10. Hermans et al. [15], 2011

Following the path opened by Vaudenay with his privacy model, Hermans et al. presented in 2011 a new model, denoted here HPVP, based on indistinguishability between two "worlds"; this is most commonly called the "left-or-right" paradigm.

The main goal of the authors was to propose a model with a clearly defined purpose that is straightforward to use for proving privacy. Like CCEG, HPVP aimed to use Vaudenay's strongest adversary.

10.1. Oracles

As for Vaudenay and CCEG, the system is empty after initialization, and a tag can be either drawn or free. The adversary has access to the generic oracles CreateTag (here it additionally returns a reference to the newly created tag), SendReader, and Result. It also has access to the following oracles.

(i) DrawTag takes two tags chosen by the adversary, generates a fresh pseudonym, and stores the corresponding tuple in a table. Depending on the bit chosen at the start of the privacy experiment (see next section), the pseudonym will reference either the first or the second tag. If one of the two tags is already referenced in the table, the oracle outputs an error.

(ii) Free recovers the corresponding tuple in the table and resets the tag referenced by the pseudonym (the first tag if the bit is 0, the second one otherwise). Then it removes the tuple from the table. When a tag is reset, its volatile memory is erased, not its nonvolatile memory (which contains its secret key).

This specific definition of the Free oracle comes from an important statement highlighted by Paise and Vaudenay in their model (see Section 5.4 for more details).

Finally, the adversary has access to the following modified generic oracles.

(i) Launch makes the reader launch a new protocol execution, together with the reader's first message.

(ii) SendTag retrieves the corresponding tuple in the table and sends the message to the tag referenced by the pseudonym (the first tag if the bit is 0, the second one otherwise). It outputs the response of the tag. If the pseudonym is not found in the table, it returns an error.

(iii) Corrupt returns the whole memory (including the current secret key) of the tag. If the tag is drawn, it returns an error.
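The left-or-right mechanism behind DrawTag can be sketched as follows; the hidden bit selects which of the two tags actually answers, and all names are illustrative:

```python
import secrets

class LeftOrRightGame:
    def __init__(self):
        self.b = secrets.randbelow(2)   # hidden bit chosen at the start
        self.D = {}                     # table: vtag -> (T_i, T_j)
        self.next_vtag = 0

    def draw_tag(self, t_i, t_j):
        """Draw the pair (t_i, t_j); vtag references t_i if b=0, else t_j."""
        for (x, y) in self.D.values():
            if t_i in (x, y) or t_j in (x, y):
                return None             # one of the two tags is already drawn
        vtag = self.next_vtag
        self.D[vtag] = (t_i, t_j)
        self.next_vtag += 1
        return vtag

    def _selected(self, vtag):
        """Internal: the real tag behind vtag (never revealed to the adversary)."""
        t_i, t_j = self.D[vtag]
        return (t_i, t_j)[self.b]

    def free(self, vtag):
        """Reset the selected tag's volatile memory and remove the tuple."""
        self.D.pop(vtag, None)
```

The adversary only ever sees `vtag`; guessing the bit `b` from the tags' answers is exactly the privacy experiment of the next section.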

All these oracles are very similar to the ones of Vaudenay, but with important differences. First, DrawTag is only applied on two tags chosen by the adversary when it queries this oracle. Then, Free specifies clearly that it erases the volatile memory of the chosen tag. Lastly, Corrupt is only authorized on a free tag. However, the intrinsic definition of a free tag (given in the Vaudenay model [22]) is that it is not accessible to the adversary, since it is not in its neighborhood. Thus, it seems impossible for the adversary to query Corrupt on a tag that it cannot manipulate (i.e., a tag that is not drawn).

10.2. Privacy Experiment

The authors keep the same adversary classes as the ones given by Vaudenay: weak, forward, destructive, strong, and narrow.

Their privacy experiment is given in Box 9, where the adversary belongs to one of the classes above.

10.3. Privacy Notions

From the previous experiment, the HPVP privacy property is based on the adversary advantage to distinguish the two worlds.

Definition 18 (privacy [15]). The RFID system is said to unconditionally (resp., computationally) provide privacy for a given class if and only if, for all adversaries (resp., polynomial-time adversaries) of that class, the advantage in guessing the hidden bit is negligible.

Note that, all along the paper, the authors claim that the already existing models do not take into account some privacy leakage, such as the cardinality of the tags' set. Yet, they neither prove nor explain how their model handles this issue, nor why it is indeed a privacy issue.

11. Privacy Analysis of Different Existing Protocols

To investigate more deeply the differences between the presented models, we study the privacy level of five different protocols in all these models. These protocols differ according to their building blocks and their underlying key infrastructure. The first protocol [37] is based on a unique long-term secret key for each tag. In contrast, in the tree-based protocol [8], tags share some long-term partial secret keys so as to speed up the authentication. Two protocols [18, 38] use key-update mechanisms to increase the privacy level in case of tag corruption; in particular, the second one [18] provides mutual authentication in order to be undesynchronizable. The last analyzed protocol [22] is based on public-key cryptography. Due to their differences, these protocols may thus ensure different privacy levels. However, we will show in this section that some models assign the same privacy level to some protocols, while other models clearly differentiate them, for example, by taking into account an attack which cannot be modeled elsewhere.

In the following, a tag has a unique identifier and should be authenticated by a legitimate reader .

11.1. Analyzed Protocols

The five RFID protocols chosen for this study are sketched in the following. Their complete descriptions and whole privacy analyses are detailed in Appendix B.

11.1.1. SK-Based Challenge/Response Authentication Protocol

The first studied protocol is the ISO/IEC 9798-2 Mechanism 2 [37], based on a PRF, with an additional nonce chosen by the tag. A tag has a unique secret key known by the reader and used for the authentication. All the tags' keys are independent.
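A minimal sketch of such a challenge/response exchange, using HMAC-SHA256 as the PRF; the standard leaves the PRF abstract, so the message layout below is an illustrative assumption:

```python
import hmac, hashlib, secrets

def reader_challenge():
    return secrets.token_bytes(16)              # nonce a from the reader

def tag_response(key, a):
    b = secrets.token_bytes(16)                 # nonce b chosen by the tag
    mac = hmac.new(key, a + b, hashlib.sha256).digest()
    return b, mac

def reader_verify(db, a, b, mac):
    """Exhaustive search: try every tag key until one MAC matches."""
    for tag_id, key in db.items():
        if hmac.compare_digest(hmac.new(key, a + b, hashlib.sha256).digest(), mac):
            return tag_id
    return None
```

Note the linear search over all independent keys: this is precisely the cost that the tree-based protocol below tries to reduce.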

11.1.2. Tree-Based Authentication Protocol

It is based on the key-tree infrastructure given by Molnar and Wagner in [8]. Basically, a key-tree is generated with as many leaves as tags in the system, characterized by its depth and its branching factor. Each leaf is randomly associated to a tag of the system, and each node is associated with a unique partial secret key indexed by the depth of the node and its branch.

We consider w.l.o.g. the path in the tree from the root to the leaf associated to a given tag. At the setup of the system, the tag is initialized with the set of partial keys attached to the nodes of its path (except the root). The reader knows the entire tree arrangement, and thus all the keys associated to all the nodes.

The protocol is carried out in as many rounds as the depth of the tree. At each round, the reader and the tag perform a challenge/response authentication as described in Figure 2 of Appendix B.2. If the tag answers correctly at each round, then the reader successfully authenticates it at the end of the last round.
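The round-by-round tree walk can be sketched as follows; deriving node keys from a master secret is a simplification of ours (the original scheme draws them at random), and all names are illustrative:

```python
import hmac, hashlib, secrets

def node_key(master, path):
    """Key of the node reached by `path` (a tuple of branch indices)."""
    return hmac.new(master, repr(path).encode(), hashlib.sha256).digest()

def tag_keys(master, leaf_index, d, q):
    """Partial keys along the path from the root to the tag's leaf."""
    digits, idx = [], leaf_index
    for _ in range(d):
        digits.append(idx % q); idx //= q
    path, keys = [], []
    for branch in reversed(digits):
        path.append(branch)
        keys.append(node_key(master, tuple(path)))
    return keys

def authenticate(master, keys, d, q):
    """One challenge/response per round; the reader walks down the tree."""
    path = []
    for level in range(d):
        a = secrets.token_bytes(8)                      # reader challenge
        resp = hmac.new(keys[level], a, hashlib.sha256).digest()
        for branch in range(q):                         # reader tries q children only
            cand = node_key(master, tuple(path + [branch]))
            if hmac.compare_digest(hmac.new(cand, a, hashlib.sha256).digest(), resp):
                path.append(branch); break
        else:
            return None                                 # round failed
    return path                                         # identifies the leaf, i.e., the tag
```

Each round narrows the search to one subtree, so the reader performs a per-round search over the branching factor instead of over all tags; the price is that tags on a common subtree share partial keys, which is exactly the correlated-secrets weakness discussed later.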

11.1.3. OSK-Based Authentication Protocol

The original OSK protocol [38] is an identification protocol, where there is no proof of the tag identity. At the setup, a tag is initialized with a unique secret key shared with the reader. All the tags' keys are independent. The tag just sends the result of a pseudorandom function applied to its key. The main feature of OSK is that the tag and the reader update the shared key after each complete protocol execution.

The OSK protocol has been introduced to ensure the forward security property: data sent by a given tag today will still be secure even if the tag's secret is disclosed by tampering with it in the future, contrary to the SK-based protocol. The protocol presented here (proposed in [22]) is slightly different from OSK, as the reader additionally sends a nonce to the tag in order to prevent replay attacks, as described in [6]. The resulting protocol ensures tag authentication rather than tag identification.
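The key-update chain and the desynchronization-tolerant lookup can be sketched as follows; the window size and function names are illustrative assumptions:

```python
import hmac, hashlib

def H(k):           # key-update hash: k_{i+1} = H(k_i)
    return hashlib.sha256(b"update" + k).digest()

def G(k, nonce):    # response PRF over the reader's nonce
    return hmac.new(k, nonce, hashlib.sha256).digest()

class Tag:
    def __init__(self, k):
        self.k = k
    def respond(self, nonce):
        r = G(self.k, nonce)
        self.k = H(self.k)          # old key erased: forward security
        return r

def reader_identify(db, nonce, resp, window=8):
    """Try each tag's key chain up to `window` steps ahead (desync tolerance)."""
    for tag_id, k in db.items():
        cand = k
        for _ in range(window):
            if hmac.compare_digest(G(cand, nonce), resp):
                db[tag_id] = H(cand)        # resynchronize on success
                return tag_id
            cand = H(cand)
    return None
```

Because H is one-way, a corrupted key does not reveal past keys, which is the forward security property; but a tag pushed more than `window` steps ahead is desynchronized, which is the attack exploited later in the analysis.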

11.1.4. O-FRAP Authentication Protocol

Many undesynchronizable authentication protocols [18, 24, 39] have been proposed to counter the main drawback of OSK, that is, the desynchronization attack. Here, we analyze O-FRAP, introduced by van Le et al. in [18].

At the setup, a tag is initialized with a couple containing a secret key and a nonce, such that the couples of all tags are independent. This couple is stored by the reader as the current secrets of the tag. Then a mutual authentication between the tag and the reader is performed, where the tag's key and/or nonce are updated at the end of the protocol execution by both entities. The main difference with OSK is that the tag always updates at least one value, even when the protocol is incomplete (in this case, the random nonce).
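The distinguishing feature, updating a value even on incomplete runs, can be sketched as follows; this is a simplified sketch of the O-FRAP idea, not the exact protocol of [18], and all names are ours:

```python
import hmac, hashlib, secrets

def F(k, data):
    return hmac.new(k, data, hashlib.sha256).digest()

class OFrapTag:
    def __init__(self, k, r):
        self.k, self.r = k, r

    def answer(self, sys_nonce):
        out = F(self.k, self.r + sys_nonce + b"tag")
        used = self.r
        self.r = secrets.token_bytes(16)   # updated even if the run aborts here
        return used, out

    def finish(self, conf, sys_nonce, used):
        # accept the reader only if it proves knowledge of k
        if hmac.compare_digest(conf, F(self.k, used + sys_nonce + b"reader")):
            self.k = F(self.k, b"update")  # mutual run completed: update key
            return True
        return False

def reader_round(db, sys_nonce, used, out):
    """db maps tag_id -> current key; returns (tag_id, confirmation) or None."""
    for tag_id, k in db.items():
        if hmac.compare_digest(out, F(k, used + sys_nonce + b"tag")):
            conf = F(k, used + sys_nonce + b"reader")
            db[tag_id] = F(k, b"update")
            return tag_id, conf
    return None
```

Since the key only changes on completed mutual runs while the nonce changes on every run, an aborted execution cannot desynchronize the two sides, at the cost of the corruption-based linking attack discussed in Appendix B.4.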

11.1.5. PK-Based Challenge/Response Authentication Protocol

It is one of the protocols given by Vaudenay in [22]. The reader has a pair of public/private keys, and a tag has a unique secret key known by the reader. All the tags' keys are independent. The encryption scheme (Enc/Dec) is considered to be either IND-CPA (indistinguishable under chosen-plaintext attack) or IND-CCA (indistinguishable under chosen-ciphertext attack) secure.

11.2. Analysis Comparison

Table 1 sums up the security analysis of the studied protocols regarding each privacy model.

11.2.1. The Lack of Comprehensiveness

In some models, several protocols are proved to ensure the same privacy level, because some attacks on these protocols cannot be formalized. For example, in the Avoine model, the OSK-based, O-FRAP, and PK-based protocols reach the same privacy level. However, as detailed in Appendix B.3, the OSK-based protocol can be desynchronized contrary to the other two, and O-FRAP is subject to a specific attack based on tag corruption (see Appendix B.4), while the PK-based protocol is not vulnerable to such attacks. This misevaluation of privacy happens in almost all models (e.g., {SK-based, tree-based, O-FRAP} for Vaudenay, CCEG, and HPVP, or {SK-based, OSK-based, O-FRAP} for DMR). The main drawback is that system designers unfamiliar with privacy will probably choose the cheapest protocol (regarding computing complexity), thinking that these protocols are equivalent regarding their privacy level.

11.2.2. The Case of Correlated Secrets

Nevertheless, some models have features that permit attributing different privacy levels to quite similar protocols. As an example, JW, DMR, and DLYZ point out an important characteristic of protocols based on correlated secrets: they prove that the tree-based protocol is not secure, while the SK-based one is. This comes from the fact that an adversary may know some secrets without being authorized to corrupt the challenge tags (as explained in Appendix  B.2). For instance, this adversary could be a tag owner that only knows its tags’ secrets and that is not able to corrupt other tags that it wants to trace. It is consequently normal that the SK-based protocol is more private than the tree-based one. Note that this differentiation cannot be established in the Avoine, Vaudenay, CCEG, and HPVP models because their adversary does not have the modularity to only corrupt certain tags. As a consequence, these models classify the SK-based and the tree-based protocols with the same privacy level.

11.2.3. The Key-Update Mechanism Dilemma

All the models (except Avoine and LBM) give the same privacy level to the SK-based protocol and to O-FRAP. This is another clear example of the issue with the privacy definitions of these models. Indeed, the two protocols do not manage the tags' secrets in the same way: a tag updates one of its secrets each time it starts an execution of O-FRAP, while a tag always keeps the same secret when it runs the SK-based protocol. For O-FRAP, the attack presented in Appendix B.4 only permits linking a freshly corrupted tag to its last incomplete protocol execution, but all the previously completed ones remain unlinkable. This is not the case with the SK-based protocol, where a tag corruption allows tracing the tag at any time (past or future). This obvious distinction between the two protocols is however not highlighted by most of the models.

11.2.4. Accuracy Refinement of the NARROW Adversary

The nuance provided in some models permits granting some protocols a reasonable privacy level. For instance, Vaudenay and HPVP confer narrow-destructive privacy on the OSK-based protocol and narrow-strong privacy on the IND-CPA-PK-based protocol, while some other models argue that the OSK-based protocol ensures no privacy at all or that the IND-CPA-PK-based protocol cannot be proved private. These last claims are highly restrictive, since these two protocols are clearly more private than the dummy identification protocol where tags send their identifier in the clear.

11.2.5. The Vaudenay Problem

Finally, Vaudenay proved in [22] that the highest privacy level of his model cannot be achieved. Yet, the highest privacy level of each of the other seven presented models can be reached, at least with the IND-CCA-PK-based protocol. To the best of our knowledge, Ouafi is the only author who tries to explain, in [20], that the Vaudenay model (i) does not reflect the exact notion of privacy that was targeted at first sight and (ii) may encompass more than privacy alone. As explained in Section 5.4, Ouafi reformulates the Vaudenay model in order to achieve its strongest privacy level.

12. Classification of the Models

In this section, we compare the different features of all the privacy models presented in this paper. We point out which model(s) are the most appropriate to use depending on whether each of these features is required. Table 2 sums up the features achieved by each model.

Note that “protocols” (resp., “tag-init protocols”) refer to authentication/identification protocols where the reader (resp., tag) is the only entity that can start a protocol execution.

12.1. Adversary Experiment

Privacy models can be compared according to the similarities and differences of their experiments. To do so, we first need to define the notion of challenge tags in some models. Indeed, Vaudenay, LBM, DMR, CCEG, and HPVP do not stipulate this specific notion in their experiments. However, since their adversary must use some tags for its attack, we consider that all the tags are challenge ones. Note that the agents that can be corrupted before the adversary's attack in the DMR model are considered as nonchallenge tags.

12.1.1. Number of Tags Allowed in the Experiment

Vaudenay, LBM, and CCEG are the only models where the adversary is free to play with all the tags of the system at the same time during its attack.

At one moment of their experiment, JW and HPVP can only play with a limited subset of the tags of the studied system. For the DLYZ model, the adversary cannot play with the set of clean tags it chose, except with the challenge tag picked at random in this set; if this set contains only two tags, it can however play with most of the tags. Then, DMR's adversary cannot play with the agents that were corrupted before the beginning of its attack. Finally, the Avoine model is the most limiting one, since the adversary can only play with two tags. This prevents the Avoine model from analyzing protocols with correlated secrets, unlike all the other models.

Therefore, if the adversary is allowed to play with all the tags of the system, then it is preferable to use the Vaudenay, LBM, or CCEG model for the privacy analysis.

12.1.2. Choice of the Challenge Tags

All the models (except the Avoine one) allow the adversary to choose the challenge tags of its attack. In the Avoine model, the challenger is the entity that performs this task, choosing the challenge tags and their intervals. The adversary has no say in the tags used for its attack: it is weaker than the adversaries of the other models. Thus, if it is considered that the adversary has the possibility to choose the challenge tags, a protocol should be analyzed with all the models except the Avoine one.

12.1.3. Attack on Incomplete Protocol Executions

In the JW, Vaudenay, DMR, CCEG, DLYZ, and HPVP models, the adversary is allowed to perform its attack on incomplete protocol executions. As illustrated in Appendix B.4, it can start an execution with a tag and not finish it. Afterward, it can use this tag during its game to break its privacy. If the adversary succeeds in doing so, then the protocol is not considered private.

For LBM, such an attack is not taken into account. The model is designed such that all the successfully completed protocol executions of a tag are protected against corruption. In other words, the adversary cannot learn any information about these previous executions, and thus the privacy of the tag is ensured. However, the adversary is authorized to link the previous incomplete executions of a corrupted tag, up to the last completed one, without compromising the security.

For the Avoine model, both scenarios are allowed. In the first game, the adversary chooses the intervals of the challenge tags that help it the most to perform its attack. It can choose intervals directly consecutive to the interval of the targeted tag. In that case, nothing prevents the adversary from using incomplete protocol executions during the experiment. In the second game, the challenger is the one that chooses the intervals, selecting those that help the least, contrary to the first game. If the adversary uses incomplete protocol executions, then the challenger can choose nonconsecutive intervals such that the incomplete executions remain meaningless to the adversary (as for LBM). For instance, some completed executions may separate the executions (completed or not) performed within the intervals.

Therefore, if a protocol must be protected against this attack, then Avoine, JW, Vaudenay, DMR, CCEG, DLYZ, and HPVP are the most appropriate models to study its privacy. If such a feature is not wished, then it can be analyzed with the Avoine and LBM models. Note that the Avoine model is the most flexible one since it can handle both scenarios.

12.2. Tag Corruption

The tamper resistance of RFID tags is a highly questionable assumption. Fortunately, all the models are flexible regarding the capacity of an adversary to corrupt tags. The two extreme cases are the impossibility of corrupting tags and the possibility of doing so without restriction. Yet, as detailed in the previous sections, intermediate levels of corruption have been introduced. To give an overall view of these levels, the models are gathered below based on their similarities, from the weakest corruption level to the strongest one.

12.2.1. Weak Adversary

Obviously, the weakest corruption level is when the adversary is not allowed to corrupt tags at all. This feature is present in the Avoine, Vaudenay, LBM, CCEG, and HPVP models. It permits formalizing the assumption of tag tamper resistance.

Although the JW, DMR, and DLYZ models consider that it is always possible to corrupt nonchallenge tags, they also define a weak level of corruption where the adversary is not able to corrupt the challenge tags. This adversary, called an insider adversary in [40], may be a tag owner that only knows its own tags' secrets and that wants to break the privacy of other tags. As explained in Section 11.2 and in Appendix B.2, this subtle adversary can be used to perform a dedicated attack on a system with correlated secrets. However, even if this attack can be caught in other models by an overpowerful adversary (e.g., Vaudenay's strong adversary), the Vaudenay, LBM, CCEG, and HPVP models are unable to precisely formalize such an intermediate adversary, since these models allow the adversary to corrupt either every tag or no tag at all.

Therefore, on the one hand, if it is assumed that the adversary can never corrupt a tag, then the Avoine, Vaudenay, LBM, CCEG, and HPVP models should be chosen for a protocol analysis. On the other hand, if it is assumed that only the nonchallenge tags can be corrupted, then the most appropriate and fair models to use are JW, DMR, and DLYZ.

12.2.2. Nonadaptive Adversary

A higher level of corruption consists in authorizing the adversary to corrupt tags only at the end of the experiment. It corresponds to the forward adversary of Vaudenay and HPVP and to the equivalent notion of Avoine. It can be viewed as a nonadaptive corruption ability since, apart from performing further corruptions, the adversary cannot adapt its attack according to the corruption results.

One privacy notion of DLYZ is close to this property, since the last key of the challenge tag is given to the distinguisher at the end of the experiment. Yet, in this case, the adversary is still allowed to adaptively corrupt the nonchallenge tags during the experiment without stopping it. This slightly increases the strength of DLYZ’s adversary.

12.2.3. Destructive Adversary

To increase the adversary’s power, some models give it the ability to pursue its attack after a corruption, leading to adaptive attacks regarding corruption. However, some models still put constraints into place. In fact, the JW model considers that the challenge tags may be corrupted, but only during the challenge phase. In other words, a tag corruption can only be used to trace the tag’s previous interactions. It is thus possible to establish a parallel between this constraint and the destructive corruption ability defined in other models (i.e., the destructive adversary of Vaudenay, CCEG, and HPVP, and the corresponding notion of LBM). Indeed, the key material obtained through a tag corruption may allow tracing the tag’s previous interactions but not the future ones, as the tag is destroyed.

12.2.4. Strong Adversary

The strongest level that can be defined is obviously when the adversary has no restriction regarding tag corruption. This corresponds to the strong adversary defined in the Vaudenay, CCEG, and HPVP models. A relatively similar notion is also defined by DLYZ. However, as for the notion discussed above, while every nonchallenge tag may be corrupted during the experiment, the challenge tag cannot, and its initial key is only revealed at the end of the experiment. This key may still help to distinguish the following interactions of this tag, but the adversary cannot adapt its attack to this result. This consequently leads to a nonadaptive adversary that may be useful in some cases. Nevertheless, one may prefer the Vaudenay, CCEG, and HPVP models to capture the strongest adversary definition regarding corruption ability.

As a conclusion, the Vaudenay, CCEG, and HPVP models offer the widest adversary granularity regarding tag corruption. (Note that CCEG’s authors consider that the strong and destructive adversaries (in Vaudenay’s sense) are equivalent in their experiment: both are able to output a standard or past nonobvious link, but not a future one. Distinguishing the two classes is therefore useless in their model.) Only these three models take into account the strongest adversary, which can corrupt tags with no restriction. Nevertheless, they do not consider the insider adversary, which represents a relevant assumption and affords, to our mind, an interesting granularity for some analyses. In this case, protocols may thus be studied with a more appropriate model, namely, either JW, DMR, or DLYZ.

12.3. Other Features

The remaining features of Table 2 are discussed in the following.

12.3.1. NARROW/WIDE Adversaries

As previously said, an adversary is said to be narrow (resp., wide) when it does not (resp., does) receive the result of a protocol execution. Several models restrict their adversary to one of these classes.

Avoine does not define a Result oracle, and there is no equivalent of such an oracle in DMR (since the adversary does not know whether a protocol execution between two agents succeeds). Both models only consider narrow adversaries.

On the contrary, the adversaries of JW, LBM, CCEG, and DLYZ are only wide ones. For JW, there is no Result oracle defined in the model, but the adversary necessarily obtains the result of a protocol execution via the output of each SendReader query. The DLYZ adversary has the same behavior: it necessarily knows this result information, since it is made public. In the LBM model, the output tape of each party is always available to the environment. Additionally, the adversary may also learn this result, as the environment can communicate arbitrarily with it. Thus, it is impossible to model a narrow adversary, since the distinguisher may always know the result of a protocol execution. For CCEG, no narrow adversary can be used in the untraceability experiment. Yet, as stressed in the analysis of OSK given in Appendix B.3, this voluntary restriction implies that this kind of protocol, despite decent security features, is not considered private.

The Vaudenay and HPVP models are the most flexible ones, since it is possible to choose either a narrow or a wide adversary. Note that the other models can nevertheless be (more or less easily) adapted to provide both adversary classes.

12.3.2. Channels Asymmetry

As already explained in Section 3, the forward channel (reader to tag) has a longer communication range than the backward channel (tag to reader). This characteristic is of interest, as it has been shown in [41] that the former can be more easily eavesdropped than the latter in practice. Yet, the Avoine model is the only one that formalizes this feature, through the Execute* oracle: the adversary may only obtain the messages sent by the reader on the forward channel.
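As a toy illustration of this restriction, the following Python sketch contrasts a full transcript with the forward-channel-only view returned by an Execute*-like oracle. The reader/tag objects and placeholder messages are hypothetical, not the model's actual notation.

```python
# Hypothetical sketch: an Execute oracle returns the full transcript,
# while an Execute*-like oracle keeps only the reader-to-tag messages
# leaked on the long-range forward channel.
def execute(reader, tag):
    """Full transcript of a 3-pass run: [(sender, message), ...]."""
    a = reader.challenge()
    b = tag.respond(a)
    c = reader.finalize(b)
    return [("reader", a), ("tag", b), ("reader", c)]

def execute_star(reader, tag):
    """Restricted view: only the forward-channel (reader) messages."""
    return [m for sender, m in execute(reader, tag) if sender == "reader"]

# Dummy parties with placeholder messages, for illustration only.
class DummyReader:
    def challenge(self):
        return "a"
    def finalize(self, b):
        return "c"

class DummyTag:
    def respond(self, a):
        return "b"

assert execute_star(DummyReader(), DummyTag()) == ["a", "c"]
```

The tag's answer "b" never reaches an Execute*-restricted adversary, which models an eavesdropper out of range of the backward channel.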

All the other models (all of them, as a matter of fact, created after the Avoine one) lack this feature and cannot represent this kind of weaker but realistic adversary. Thus, assuming that the adversary is only able to obtain the messages sent by the reader, the analysis must be performed with the Avoine model.

12.3.3. Analyzable Protocols

Some models are designed “by default” to analyze specific identification/authentication protocols. In the Avoine model, the oracles used to interact with the system can only handle 3-pass protocols. JW’s authors only aim to analyze protocols based on symmetric-key cryptography. Finally, the DLYZ model only handles protocols with a fixed number of passes.

On the contrary, Vaudenay, LBM, DMR, CCEG, and HPVP can analyze any identification/authentication protocol. Some of the restrictive models can nevertheless be adapted to analyze most existing protocols. For instance, the Avoine model can be slightly modified to analyze 2-pass classical challenge-response protocols, and the JW model does not forbid the analysis of protocols with public-key cryptography.

Finally, considering protocols where the tag starts an execution, JW and DMR are the only models that can analyze such protocols by default.

13. Privacy Properties

In the previous section, we discussed the features that are present (or not) in each of the studied models. To conclude the investigation, we go a step further and compare the privacy properties between them.

This task is not an easy one, as the differing features of the models make them tough to compare in some cases. Indeed, in the following, we highlight the fact that, when a privacy property of a given model is said to be “stronger” than that of another model, the “weaker” model may present some features that are absent from the “stronger” one. We assume that system designers are aware of this fact and that, in such a case, they may thus prefer to use the weaker model for their privacy analysis. Except when this fact must be highlighted, we will not repeat it in each comparison.

13.1. Indistinguishability of Tags

Regarding only the privacy notions, the Avoine and JW models are really close. Indeed, they both define privacy as the infeasibility for an adversary to recognize one tag among two. The JW model has been designed after the Avoine one, as an improved model, since it takes into account several flaws of the Avoine model. It can easily be proved that each privacy notion of JW implies the corresponding notion of Avoine: the goal is the same, and any request of an Avoine adversary can be performed by a JW adversary.

In the DMR model, the privacy property corresponds to the infeasibility of linking two traces produced by the same agent (in our case, a tag). This notion is also really close to the one defined in the JW model. Clearly, for JW, the adversary’s capacity to retrieve the tag associated with the challenge bit permits linking two traces, and reciprocally. However, as the DMR model only defines a nonadaptive adversary regarding corruption, JW’s privacy notion is obviously stronger than DMR’s untraceability.

Largely inspired by the design of the Vaudenay model (to which we will come back later), the CCEG and HPVP models offer a comprehensive list of oracles that permits any JW adversary to be represented in their models. Regarding the privacy definition, it is obvious that the output of a JW adversary is exactly a CCEG nonobvious link (standard or past) and can thus be directly exploited by a CCEG adversary. As a consequence, each CCEG untraceability property obviously implies the corresponding JW privacy property. The reciprocal does not lead to a tight reduction. Indeed, a CCEG adversary may shuffle the tags’ pseudonyms several times (by performing successive DrawTag and Free queries), which is hard to simulate in the JW model.

The HPVP model defines privacy using the well-known “left-or-right” paradigm. As detailed in Section 10, it splits the tags space into two worlds. Nevertheless, a JW adversary can be simulated in this model. First, the HPVP adversary draws each tag of the system. (A single tag can be given as the two inputs of the DrawTag oracle.) Then, the two selected challenge tags of JW are freed and given as input to the DrawTag oracle. If the JW adversary is able to recognize the drawn tag, then an HPVP adversary may use it to output the guessed bit. Here again, the reciprocal is not true, for the same reasons as for the CCEG model.
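The "left-or-right" mechanism underlying this simulation can be sketched as follows. This is a deliberately simplified game with hypothetical names; the real DrawTag oracle additionally manages free/drawn states and reference lists.

```python
import secrets

class LeftOrRightGame:
    """Toy version of the left-or-right DrawTag paradigm: a hidden
    bit b decides whether the left or the right tag is drawn."""
    def __init__(self):
        self.b = secrets.randbits(1)   # hidden challenge bit
        self.drawn = {}                # handle -> real tag identity

    def draw_tag(self, left, right):
        chosen = left if self.b == 0 else right
        handle = f"vtag{len(self.drawn)}"
        self.drawn[handle] = chosen    # mapping hidden from the adversary
        return handle

game = LeftOrRightGame()
# Giving the same tag twice leaks nothing about b ...
h0 = game.draw_tag("tag0", "tag0")
assert game.drawn[h0] == "tag0"
# ... while two different tags make the drawn identity depend on b.
h1 = game.draw_tag("tag1", "tag2")
assert game.drawn[h1] == ("tag1" if game.b == 0 else "tag2")
```

Guessing which identity hides behind the second handle amounts to guessing the bit b, which is exactly the goal of the HPVP experiment.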

As a conclusion, assuming that privacy is defined as indistinguishability of tags, the most comprehensive models are HPVP and CCEG. Intuitively, these two models have equivalent privacy notions. Indeed, an adversary that succeeds in the HPVP experiment can easily output a nonobvious link. Conversely, a nonobvious link permits distinguishing one tag from the others and can thus be used in the “left-or-right” paradigm. However, formally proving this equivalence is not obvious, for the following reasons. Firstly, at some point of the HPVP experiment, the adversary must use (at least once) the DrawTag oracle on two different tags in order to obtain information about the challenge bit. From that moment, this adversary can no longer interact with all the tags, whereas a CCEG adversary can always interact with all the tags if it wants to. Secondly, a CCEG adversary may draw more than one tag in a DrawTag request (e.g., three tags out of four). If an HPVP adversary wants to use such an adversary as a subroutine to succeed in the HPVP experiment, simulating this behavior forces some arbitrary choices and thus leads to a nontight reduction.

13.2. Real World versus Simulated World

The last three models (i.e., Vaudenay, LBM, and DLYZ) define privacy as, in a nutshell, the infeasibility of distinguishing the interactions of an adversary with the real system from the interactions of a simulated adversary with a simulated world. In this second world, the simulator does not know the keys of the system. Nevertheless, when a tag corruption is asked, the tag’s real secret key is returned. The idea behind this privacy notion is that, if these two worlds can be distinguished, then some information must leak from the messages of the real world (which are computed with the real keys of the system).
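The following minimal Python sketch shows why a leak separates the two worlds: the simulator must invent protocol messages without the key, yet corruption reveals the real key, so a distinguisher can check past messages for consistency. All names are illustrative, with HMAC-SHA256 standing in for an arbitrary keyed computation.

```python
import hmac, hashlib, os

key = os.urandom(16)                   # real secret of the tag

def real_message(nonce):
    """Real world: messages are computed with the real key."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def simulated_message(nonce):
    """Simulated world: the simulator knows no key, so it can
    only output random-looking messages."""
    return os.urandom(32)

def corrupt():
    """In BOTH worlds, corruption returns the real key."""
    return key

# A distinguisher replays a past nonce/message pair against the
# revealed key: in the real world the check always succeeds.
n = os.urandom(8)
m = real_message(n)
assert hmac.new(corrupt(), n, hashlib.sha256).digest() == m
# Against a simulated message, the same check fails (except with
# negligible probability), which distinguishes the two worlds.
```

If the protocol messages leaked nothing about the key, the simulator's random messages would be indistinguishable from the real ones even after corruption, which is exactly the privacy requirement.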

The most adaptive and comprehensive model using this principle is clearly the Vaudenay model. First, this model offers the widest range of adversaries. Then, these adversaries can be adaptive, contrary to the ones of DLYZ. Finally, as explained in Section  12.1, the LBM model only ensures the privacy of authentications prior to the last complete one, while the Vaudenay model considers privacy of all the possible authentications. As a consequence, for equivalent adversary classes, the Vaudenay model is stronger than LBM and DLYZ.

From another point of view, the UC framework is generally used to analyze protocols that are not run alone, but in parallel or concurrently with other protocols. Here, the interesting feature is that the environment can interact with the system and thus may help the adversary to perform its attack, while Vaudenay’s adversary is on its own. This fact has been frequently used in the UC literature to prove that some constructions “considered secure” are indeed not. As a consequence, if the protocol to analyze is designed to belong to a complex system, its privacy may be studied in the LBM model. Nevertheless, if a strong privacy property is wished, the protocol should also be analyzed in the Vaudenay model.

13.3. Between the Two Families

The oracle description of the CCEG model is really close to the one of Vaudenay. The authors of the former describe their model as a restriction of the Vaudenay one, mainly regarding the experiment. Indeed, the CCEG adversary is required to output a nonobvious link, while no particular output format is imposed in the Vaudenay model. Consequently, CCEG’s privacy notion is intuitively weaker than Vaudenay’s one (for an equivalent adversary). Nevertheless, as proved in [11], CCEG’s strongest untraceability property is reachable, while Vaudenay’s strong privacy is impossible. Furthermore, to strengthen their result, CCEG’s authors also prove with a “toy scheme” that their untraceability notion considers attacks that are not taken into account in the two “highest” reachable privacy levels of Vaudenay. As a consequence, the CCEG model defines a potentially weaker privacy notion, but, under this framework, protocol privacy can be studied against a stronger adversary than in the Vaudenay model.

Similar results may be proved for the HPVP model. First, its authors exhibit in their paper a protocol that ensures the strongest privacy notion of their model. Then, using the “toy scheme” defined in [11], it can be proved that the attacks highlighted by CCEG are also taken into account in HPVP’s strongest privacy notion, while they are again not considered in the reachable privacy levels of Vaudenay. However, as for the CCEG model, it can be proved that Vaudenay’s privacy implies HPVP’s one for an equivalent adversary class. As this final result is not intuitive, we prove it in Appendix C.

To conclude this discussion, we highlight some existing results about the DLYZ model. The authors of the original paper argue that JW’s privacy notion does not imply theirs and use several schemes to illustrate their claim. One example is a system composed of only one tag. Clearly, such a scheme cannot be analyzed in the JW model, since the JW experiment requires at least two tags. Thus, their claim that the proposed scheme is private in their sense is doubtful. Additionally, the argument claiming that this scheme is not private in the JW sense is also not considered acceptable, according to the authors of [42]. Furthermore, in such a special case of single-tag systems, DLYZ’s authors say that their privacy notion is reduced to the basic zero-knowledge definition which, according to them, provides a reasonable privacy. However, in practice, each time this lonely tag is accepted by a reader, a wide adversary is obviously able to link this authentication to the previous ones. To our mind, this is obviously a breach of privacy. Finally, the authors of [42] go one step beyond and formally prove that JW’s privacy notion is equivalent to DLYZ’s one (Theorem 1 of [42]).

14. Conclusion

In this paper, we first presented in detail eight of the most well-known existing privacy models for RFID. We exhibited and discussed the differences between these models regarding their features and their privacy notions. As a preliminary conclusion, none of the existing models encompasses all the others. The first reason is that no model offers enough granularity to provide all the features detailed previously. Even if it is sometimes possible to extend an existing model to take into account a new property or a new assumption, it is not always a trivial task to add all of them.

Throughout our study, it appears that the Vaudenay model is the one that integrates the greatest number of features and defines the strongest privacy notion. As a default choice, the Vaudenay model is probably the best one. Nevertheless, some drawbacks have been highlighted. Firstly, the strongest privacy property of this model cannot be ensured by any protocol. To study the security of a protocol against the strongest (known) adversary, one may thus prefer the CCEG or the HPVP model. Secondly, the Vaudenay model (as some others) considers that tracing a tag after an incomplete protocol execution compromises privacy. On the one hand, this is a relevant consideration that ensures a strong privacy level. On the other hand, relaxing this constraint helps to design more efficient protocols with a still reasonable privacy level, using the Avoine and LBM models. Finally, the lack of granularity of all the models makes it difficult to fairly distinguish, within a given model, protocols with different security levels.

If system designers have precisely defined the requested properties of their application and the assumptions regarding potential adversaries, then they might use our results to select the most appropriate model. Thereby, they can design or select the most adapted and efficient protocol for their needs. Nevertheless, we are convinced that unifying and simplifying the models would help the community to design and compare protocols meaningfully.

Appendices

A. General Statements about the UC Framework

A.1. The Environment

In the UC framework, the environment’s purpose is to manage the evolution of the system. In other words, this entity is in charge of the activation of all the parties, including the adversary. The environment is the only entity able to request a party to initiate a new execution of the studied protocol. It is also able to read the output tapes of the parties of the system and of the adversary. On the other hand, the environment is not assumed to read the incoming and outgoing messages of the parties during a protocol execution.

While this new entity is quite unusual compared to the other privacy models in RFID, it permits formalizing systems where there is an underlying communication structure that may be unknown to the adversary. In the other models, the adversary is in charge of the activation of the parties. As a consequence, if there exists an underlying activation sequence that is unknown to the adversary, the adversary cannot respect it and thus may lose information that would help it to perform its attack. The activation scheduling performed by the environment thus strengthens the power of the adversary.

A.2. The Real World

The system is composed of several honest parties that interact together through the protocol Ident in order to achieve a well-defined objective.

An adversary is in charge of the communication channels: it can eavesdrop, modify, and schedule all the communications between the honest parties in an arbitrary way. The adversary may also be able to corrupt parties and obtain full knowledge of their state. Corrupted parties are assumed to be totally controlled by the adversary afterwards.

The environment and the adversary can communicate in an arbitrary way. Consequently, if the adversary wants to, it can forward all the communications to the environment. It can also ask the environment to launch new executions of Ident. At the end of the experiment, the adversary may send its final output to the environment, which is the last activated entity of the system. Then, the environment outputs an arbitrary string, denoted Exec, which can be reduced to one bit as proved by Canetti in [31, 32].

A.3. The Ideal World

Here, all the honest parties have access to the ideal functionality, which is a trusted and uncorrupted party. The ideal functionality must trivially ensure the desired security objectives of the protocol and does not depend on any cryptographic mechanism.

Equivalently to the adversary in the real world, a simulated adversary is defined, with which the environment can arbitrarily discuss. However, this simulated adversary can no longer directly interact with the parties: it can only communicate with the ideal functionality, which manages all the entities’ communications. The main goal of the simulated adversary is to reproduce the behavior of the real-world adversary as faithfully as possible. Since (i) the environment may transfer protocol messages to the adversary, (ii) the simulated adversary does not have access to such messages, and (iii) the ideal functionality does not produce them, the simulated adversary should simulate these messages for the environment. The final output of the environment is again denoted Exec, and the protocol UC-realizes the ideal functionality when no environment can distinguish the two worlds (as defined in [31]).

B. Detailed Privacy Analysis of Five Protocols

In the following, the analyzed protocols rely on pseudorandom functions and one-way functions, and (Enc/Dec) refers to an encryption scheme. The security parameter of the system is left implicit.

B.1. SK-Based Challenge/Response Authentication Protocol [37]

In this protocol, it is obvious that one single corruption of a tag allows it to be traced at any time. This is feasible because the secret key of a tag is a fixed value and the nonces used in the pseudorandom function are sent in the clear. Thus, an adversary is able to recompute the answer of the corrupted tag for a past execution and compare it with the previously sent one. If these values are equal, then the adversary is convinced that the corrupted tag performed this authentication. (Note that this equality can be due to a collision, but this happens with a negligible probability.) Nevertheless, the corruption of another tag does not help to trace a given tag, since all the secret keys are independent. Consequently, this protocol can only reach privacy properties when the adversary is not allowed to corrupt the challenge tags.
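This attack can be sketched in a few lines of Python, with HMAC-SHA256 standing in for the pseudorandom function; the names below are illustrative, not the paper's notation.

```python
import hmac, hashlib, os

def prf(key, msg):
    """Stand-in for the protocol's pseudorandom function."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class Tag:
    def __init__(self):
        self.key = os.urandom(16)          # fixed secret key

    def respond(self, reader_nonce):
        tag_nonce = os.urandom(8)          # nonces sent in the clear
        return tag_nonce, prf(self.key, reader_nonce + tag_nonce)

# The adversary eavesdrops one authentication transcript ...
tag = Tag()
nr = os.urandom(8)
nt, answer = tag.respond(nr)

# ... later corrupts the tag and recomputes the answer: a match
# links the old transcript to the corrupted tag.
revealed_key = tag.key                      # result of a Corrupt query
assert prf(revealed_key, nr + nt) == answer
```

Since the key never changes, the same check works on any past or future transcript of the corrupted tag.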

Therefore this protocol is -RTE in the Avoine model (proved for this kind of protocols in [9]), and -private in the JW model (proved in [16]). It is untraceable for DMR (proved in [13]) and -private in the DLYZ model (the proof of a similar protocol in [12] can be trivially adapted).

This protocol is -private for Vaudenay (proved in [22]) and for HPVP. It is -untraceable for CCEG. The proofs for HPVP and CCEG are very similar to the ones of Vaudenay.

Finally, this protocol cannot UC-emulate the ideal functionality in the LBM model as the attack presented here permits an adversary to link several executions while this is not possible for the simulator (as state() is removed after a corruption).

B.2. Tree-Based Authentication Protocol [8]

In this protocol, the main drawback is that some partial keys are shared by several tags. For instance, let us first say that a random tag is chosen and corrupted: its secret keys are revealed. Then, consider two other tags whose paths overlap differently with the corrupted tag’s path: the first one shares its first two nodes (and thus the corresponding keys) with the corrupted tag, while the second one shares only the first node. From the keys revealed during the corruption, it is therefore possible to differentiate these two tags: the first tag’s answers will be verifiable with the revealed second-level key, but this is not the case for the second tag, since it does not use that revealed key. Note that, in this example, the challenge tags are not corrupted: only one other tag is corrupted.
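A toy version of this distinguishing step can be written as follows; the 3-level key paths and all names are illustrative assumptions, not the actual tree parameters.

```python
import os

# Keys of the corrupted tag: one key per level of its path.
k1, k2 = os.urandom(16), os.urandom(16)
revealed = {k1, k2}                 # partial keys shared with other tags

# Two candidate tags: the first shares its two upper keys with the
# corrupted tag, the second only the root key.
tag_a = [k1, k2, os.urandom(16)]
tag_b = [k1, os.urandom(16), os.urandom(16)]

def verifiable_levels(path, revealed_keys):
    """Number of levels of a tag's answer that the adversary can
    verify with the revealed keys."""
    count = 0
    for key in path:
        if key not in revealed_keys:
            break
        count += 1
    return count

# The two tags are distinguishable from the revealed keys alone,
# although neither of them was corrupted.
assert verifiable_levels(tag_a, revealed) == 2
assert verifiable_levels(tag_b, revealed) == 1
```

The insider adversary never touches the two candidate tags; the correlation between secrets alone breaks their indistinguishability.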

Also, this protocol faces the same problem as the SK-based protocol: the corruption of a tag allows tracing it unconditionally. Thus for all the models, we consider that the adversary is not allowed to corrupt (at least) the challenge tags. Note that this option is not available in LBM, and this protocol is consequently not -secure in this model.

It should normally not be possible to study this kind of protocol in the Avoine model because of the correlated secrets, but the analysis is given here to show the contrasts between the different models. In the Avoine model, since the adversary only plays with the two challenge tags, the protocol does not suffer from the previous attack. Therefore, the protocol is -RTE (same proof as for the SK-based protocol). For Vaudenay and HPVP, the protocol is -private, and -untraceable for CCEG: clearly, since no secret is revealed, the proof is similar to the one for the SK-based protocol.

The adversary is able to corrupt the nonchallenge tags in JW, and the tags that are not part of its attack in DMR. Thus, the attack presented above can be formalized in these two models. Consequently, the protocol is not -private for JW (explained in [16] and proved in [6, 43]) and not untraceable for DMR.

For DLYZ, we use the method provided in [12] to show that the protocol is not -private. We consider that the simulator runs the underlying adversary as a subroutine. The simulator basically just runs the adversary, and both obtain several keys from the corruption of nonclean tags in the first phase. Let us also consider that the adversary returns a set of clean tags such that (i) it contains several tags and (ii) each tag in it can easily be recognized thanks to the revealed keys. Then, the adversary will be able to recognize the challenge tag. But the simulator does not know which challenge tag has been chosen, and thus has to choose a tag to simulate at random. At the end of the experiment, the adversary will always retrieve the correct challenge tag, contrary to the simulator: their views will be distinguishable. Therefore, the protocol is not -private.

B.3. OSK-Based Authentication Protocol [22]

A significant attack on this kind of protocol has been defined by Juels and Weis in their privacy model [16], based on the fact that a tag’s key can be updated while the equivalent one stored by the reader is not. Note that, upon receipt of a message, the reader tries to find a match with all tags’ keys and a bounded number of their successive updates. Thus, if the adversary sends more consecutive authentication requests to a tag than this bound, without transferring the answers to the reader, the shared secrets stored in the tag and in the reader become desynchronized. Therefore, if the adversary has access to the authentication result on the reader’s side, it is able to recognize a desynchronized tag from another random tag, as the former will be rejected. This attack is generally called a desynchronization attack.
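A minimal sketch of this desynchronization attack on an OSK-style hash chain follows; the SHA-256-based stand-ins for the two functions and the search window of 3 updates are assumptions for illustration.

```python
import hashlib

def H(k):
    """Key-update function of the chain (stand-in)."""
    return hashlib.sha256(b"update" + k).digest()

def G(k):
    """Answer function (stand-in)."""
    return hashlib.sha256(b"answer" + k).digest()

DELTA = 3                     # reader's search window (assumption)
k0 = bytes(16)                # initial shared secret
tag_key = k0

def tag_answer():
    global tag_key
    a = G(tag_key)
    tag_key = H(tag_key)      # the tag updates its key at every query
    return a

def reader_accepts(answer, stored=k0, delta=DELTA):
    k = stored
    for _ in range(delta + 1):
        if G(k) == answer:
            return True
        k = H(k)              # the reader only searches delta updates
    return False

# The adversary queries the tag DELTA + 1 times without forwarding
# the answers: the tag's key leaves the reader's search window.
for _ in range(DELTA + 1):
    tag_answer()
# A wide adversary observes that the legitimate tag is now rejected.
assert reader_accepts(tag_answer()) is False
```

Only an adversary that can observe the rejection (a wide one) turns this desynchronization into a tracing attack, which is exactly the distinction discussed below.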

Recall that a narrow adversary does not have access to the authentication result on the reader’s side, while a wide one does have this access (e.g., through a Result query).

Considering a narrow adversary, under the one-wayness assumption of the key-update function, it is obviously infeasible to link a secret key to a previous authentication transcript, as this is equivalent to inverting the function. Furthermore, since all tags’ secrets are independent, corrupting one tag does not allow tracing the other ones. Since the adversary is restricted to be narrow in the Avoine and DMR models, the desynchronization attack does not work, and the security level is thus equivalent to the one of the SK-based protocol (Figure 1): the protocol is, respectively, -RTE (proved in [9]) and untraceable (proof similar to the one in [13]). Considering tag corruption, it is furthermore -RTEC in the Avoine model (proved in [9]). Regarding the Vaudenay and HPVP models, the protocol is narrow-destructive-private (proved in [15, 22]).

When the adversary is wide, the protocol is vulnerable to the desynchronization attack explained above. Therefore, the protocol is not -private for JW (proved in [16]) and not -untraceable for CCEG. In the LBM model, a legitimate tag cannot be rejected in the ideal world, as the ideal functionality will always accept it, while the desynchronization attack works in the real world; the protocol is thus not secure in this model.

For DLYZ, the same problem as for the tree-based protocol appears. If one of the two challenge tags has been desynchronized by the adversary, then the adversary can distinguish these tags depending on the result of an execution in the second phase. But the simulator does not know which challenge tag has been chosen, and thus has to choose at random a tag (victim or not of the desynchronization attack) to simulate. At the end of the experiment, the adversary is always able to retrieve the correct challenge tag, which is not the case for the simulator. This implies that their views will be distinguishable. Therefore, the protocol is not -private, because at least one adversary can produce a distinguishable view (Figure 3).

B.4. O-FRAP Authentication Protocol [18]

The procedure is detailed in Algorithm 1, where the key update works as follows. First, if the reader identified the tag using the previous key, it replaces the current key with that previous one. Secondly, it refreshes the key with a new value derived through the pseudorandom function.

Input: the tag’s answer
Output: the reader’s reply (or reject)
(1) parse the received answer
(2) if there is a database entry (identity, previous key, current key) whose precomputed value matches the answer (for the previous key, resp. the current key) then
(3)       select the previous key (resp. the current key)
(4)      if the answer is verified under the selected key then
(5)          the tag is correctly authenticated
(6)         send the reply computed with the selected key
(7)          update the entry (previous key, current key)
(8)      end if
(9) end if
(10) for all database entries (identity, previous key, current key) and both keys do
(11)      recompute the expected answer for the entry
(12)       if it matches the received answer then
(13)           the tag is correctly authenticated
(14)          send the reply computed with the matching key
(15)           update the entry (previous key, current key)
(16)      end if
(17) end for
(18) return reject if no entry matched

Avoine et al. describe in [28] an attack that works when the adversary is able to corrupt the challenge tag. This attack can be applied to the undesynchronizable protocols presented in [18, 24, 39]. First, the adversary makes the reader and the tag start a new protocol execution, but blocks the last message sent from the reader to the tag. Then, if the adversary corrupts the tag directly after this incomplete execution, it is able to recognize the tag by recomputing its answer, as the key has not been updated and the nonces have been sent in the clear. Note that the traceability attack on O-FRAP presented in [44] is specific to the way Algorithm 1 is defined there and does not apply here.
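The attack can be sketched with an intentionally simplified O-FRAP-like tag; HMAC-SHA256 stands in for the pseudorandom function, and the key-update rule is an illustrative assumption.

```python
import hmac, hashlib, os

def F(key, msg):
    """Stand-in for the protocol's pseudorandom function."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class Tag:
    def __init__(self):
        self.key = os.urandom(16)

    def answer(self, reader_nonce):
        self.tag_nonce = os.urandom(8)       # nonces sent in the clear
        return self.tag_nonce, F(self.key, reader_nonce + self.tag_nonce)

    def finish(self, last_message):
        # The key is refreshed only when the execution completes.
        self.key = F(self.key, b"refresh")[:16]

tag = Tag()
nr = os.urandom(8)
nt, v = tag.answer(nr)
# The adversary blocks the reader's last message: finish() never runs,
# so the key stays unchanged.  Corrupting the tag now links it to the
# eavesdropped transcript (nr, nt, v).
assert F(tag.key, nr + nt) == v
```

Had the execution completed, the key would have been refreshed and the recomputation would fail, which is why only incomplete executions enable this attack.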

Therefore, the adversary is not allowed to issue any Corrupt query against this protocol. In that case, the desynchronization attack of OSK does not work here either. As a consequence, for JW, Vaudenay, CCEG, and HPVP, the privacy level of O-FRAP is the same as the one of the SK-based protocol (the proofs are equivalent): it is, respectively, -private, -private, -untraceable, and -private.

In the Avoine and DMR models, the protocol is -RTE and untraceable: the attack presented above does not work without corruption, since the tags’ keys are needed. The proofs are thus similar to the ones of the SK-based protocol. The protocol is furthermore -RTEC for Avoine because, in that case, the challenger can give nonconsecutive intervals (contrary to the consecutive ones needed for the above attack): corrupting a tag thus does not help to trace the targeted one.

Since the analysis in LBM is only related to completed protocol executions, this attack can be perfectly simulated in the ideal world using the knowledge obtained from the corruption, as proved in [18]. The protocol is thus -secure.

For DLYZ, the protocol is -private: the proof is similar to the one of the SK-based protocol when no corruption is allowed. Regarding the stronger notion where the challenge tag’s key is revealed at the end of the experiment, it is possible to define an adversary whose view is distinguishable from the simulator’s one. The simulator just runs the adversary as a subroutine. The adversary forces an interaction between the reader and the challenge tag and blocks the last message. The simulator has to provide a simulated incomplete interaction with the tag: since it does not have any information about the tag’s key, this interaction can only be composed of random messages. At the end, the tag’s secrets are revealed to the distinguisher, which is thus able to recognize whether the interaction corresponds to a real incomplete interaction with the tag or a simulated one. The protocol therefore does not satisfy this stronger notion (Figure 4).

B.5. PK-Based Challenge/Response Authentication Protocol [22]

First, it is important to note that, under IND-CPA security only, this protocol may not be easily proved private for wide adversaries in any model. The main reason is that the simulator/blinder in the proof does not have access to a decryption oracle in the IND-CPA experiment. Therefore, this simulator/blinder is unable to correctly simulate the Result oracle and thus has to answer 0 or 1 at random in some cases. Hence, an adversary may be able to detect whether it is interacting with the real world or with a simulated one. CCEG prove that their untraceability notion can nevertheless be reached by PK-based protocols using an IND-CPA cryptosystem, but only by adding other security mechanisms to the protocol (i.e., a MAC scheme).

For Avoine and DMR, since the adversary is narrow, this problem does not appear (i.e., there is no query to a Result oracle). When the cryptosystem is IND-CPA secure, the protocol is thus -RTE and -RTEC for Avoine, and untraceable for DMR.

The proof is as follows in the Avoine model but can be easily adapted to the DMR model. We show that, if there exists an adversary that wins (with ), then it is possible to construct an adversary that wins the IND-CPA game. To do so, runs as a subroutine, simulating the system to by answering all oracle queries made by . At the end of the IND-CPA game, answers what answers for . Here, knows the secrets of and at the beginning of the IND-CPA game, in order to perform it. When asks the interactions for and , answers the corresponding ciphertexts for these interactions using the correct plaintext. When asks the interactions for , then submits the plaintexts for both and for these interactions to the IND-CPA challenger . receives the ciphertexts answered by for , where is the unknown bit of the IND-CPA experiment, and transfers them to . So far, the simulation done by for is perfect. Then, two cases can occur. (1) does not need ’s secrets (i.e., is playing the experiment). wins , so its advantage is nonnegligible, and so is the advantage of . (2) asks ’s secrets (i.e., is playing the Forward experiment). does not know , so it sends at random ’s or ’s secrets. If sends the expected ones, then wins , so its advantage is nonnegligible, and so is the advantage of . If not, at worst answers 0 or 1 at random. Therefore, the overall advantage of is nonnegligible, and so is the advantage of .

Consequently, is an adversary that wins the IND-CPA game with nonnegligible advantage, which concludes the proof.
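The arithmetic behind case (2) of the reduction can be sketched numerically. The assumptions below are ours, for illustration only: the privacy adversary wins its experiment with probability 1/2 + eps, the reduction guesses which tag's secrets to hand over with probability 1/2, and on a wrong guess it answers the IND-CPA challenger uniformly at random.

```python
def reduction_advantage(eps: float) -> float:
    """Lower bound on the reduction's IND-CPA advantage, assuming the
    privacy adversary wins with probability 1/2 + eps (our toy model).
    Right guess of the secrets (prob. 1/2): the adversary's success
    transfers.  Wrong guess (prob. 1/2): the reduction wins a coin flip."""
    p_right_guess = 0.5 * (0.5 + eps)
    p_wrong_guess = 0.5 * 0.5
    return (p_right_guess + p_wrong_guess) - 0.5
```

Under these assumptions the reduction's advantage is eps/2, so it is nonnegligible whenever eps is — which is the shape of the conclusion drawn in the proof.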

Vaudenay proves in [22] that the protocol is --private with IND-CPA security and furthermore -private with IND-CCA security. Since the privacy notions of JW are included in those of Vaudenay (as explained in Section 13), the protocol is also --private for JW. HPVP proves in [15] that the protocol is likewise --private with IND-CPA security, but -private with IND-CCA.

In the LBM model, an environment able to distinguish the real world from the ideal one can easily be transformed into a distinguisher against the IND-CCA property of the underlying encryption scheme. The protocol is thus -secure.

In the CCEG model, the protocol is -untraceable with IND-CCA security (proved in [11]). In the DLYZ model, the protocol is also --private with IND-CCA security: the proof follows the same reasoning as the one of CCEG (Figure 5).

C. The Vaudenay Model Implies the HPVP Model

The following theorem proves that, for a given adversary class, the privacy property of the Vaudenay model is at least as strong as the one of HPVP.

Theorem 19. For any adversary class , , , , the -privacy property of the Vaudenay model implies the -privacy property of the HPVP model.

Proof. Both models define the same adversary classes but differ in their experiments. However, we show here that, for a given class , Vaudenay’s -privacy implies HPVP’s. To do so, we exhibit an adversary in Vaudenay, denoted , that emulates the system to an adversary playing HPVP’s -experiment, denoted , and uses the output of the latter to break Vaudenay’s -privacy.
First, can answer all the possible queries performed by during its experiment. The SendTag, SendReader, Result, CreateTag, and Launch queries can easily be emulated by given their strong similarity in both models. For the DrawTag oracle, the Vaudenay model must be slightly modified in order to emulate the one of HPVP. Indeed, in HPVP, this oracle formalizes the “left-or-right” paradigm. To handle this issue, we assume that, when gives DrawTag as input a probability distribution of the form “,” this query also follows the “left-or-right” paradigm.
Also, can only corrupt tags, while only tags can be corrupted in the Vaudenay model. Nevertheless, can correctly reply to these queries: upon a corruption query on the tag , draws using a special probability distribution which attributes probability 1 to and 0 to all the other tags. It can then corrupt the tag, transmit the data to , and free . This method works correctly for and adversaries (and their variants). However, it must be adapted for a adversary. Indeed, in both models, such an adversary can only perform corrupt queries after the first one has been made, and must anticipate all the possible corruption queries of . Thus, upon the first corruption query, first frees all tags and then draws them one by one in order to learn the correspondence between all tag identifiers and their pseudonyms. Finally, is able to reply correctly to all the corruption queries.
This simulation is perfect and cannot be detected by , which consequently outputs its guessed bit with its usual probability. Then, using this bit, can decide which tag has been drawn by the DrawTag queries. Therefore, the success probability of is exactly the one of . As Vaudenay’s blinder cannot decide in advance which tag should be simulated after a DrawTag, the success probability of the blinded adversary is necessarily one half (a random guess of the bit).
Thus, if there exists an attack for a given system against the -privacy in HPVP, then there exists an attack against the -privacy that succeeds with the same probability in the Vaudenay model. Therefore, for any adversary class , Vaudenay’s -privacy implies HPVP’s one.
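The point-mass DrawTag trick used in the proof to emulate a corruption query on a free tag can be sketched as follows. The class and method names (`TagSystem`, `draw`, `corrupt`, `free`) are our illustrative assumptions, not the oracles' formal definitions, and pseudonym handling is elided.

```python
class TagSystem:
    """Minimal stand-in for the Vaudenay-style system: tags must be
    drawn before they can be corrupted."""
    def __init__(self, tags):
        self.secrets = dict(tags)   # tag identifier -> internal secret
        self.drawn = set()

    def draw(self, dist):
        """dist maps tag identifiers to probabilities; a point-mass
        distribution deterministically selects a single tag."""
        (tag,) = [t for t, p in dist.items() if p == 1.0]
        self.drawn.add(tag)
        return tag

    def corrupt(self, tag):
        assert tag in self.drawn    # only drawn tags may be corrupted
        return self.secrets[tag]

    def free(self, tag):
        self.drawn.discard(tag)

def emulated_corrupt(system, tag):
    """Emulate a corruption of a currently free tag, as in the proof:
    draw it with a point-mass distribution, corrupt it, then free it."""
    handle = system.draw({t: (1.0 if t == tag else 0.0)
                          for t in system.secrets})
    data = system.corrupt(handle)
    system.free(handle)
    return data
```

The emulation leaves the tag free again afterwards, so the surrounding experiment observes no difference in the system state.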

The converse is hard to prove for two main reasons. Firstly, the output of Vaudenay’s experiment is not specified and may thus be unexploitable by . Secondly, the DrawTag oracle may receive as input an arbitrary distribution that can be hard to simulate using the “left-or-right” DrawTag of HPVP.

Acknowledgment

This work was partially funded by the Walloon Region Marshall plan through the 816922 Project SEE.