Journal of Computer Networks and Communications

Special Issue

Privacy and Security in Wireless Sensor Networks: Protocols, Algorithms, and Efficient Architectures


Research Article | Open Access

Volume 2013 |Article ID 710275 | https://doi.org/10.1155/2013/710275

Iwen Coisel, Tania Martin, "Untangling RFID Privacy Models", Journal of Computer Networks and Communications, vol. 2013, Article ID 710275, 26 pages, 2013. https://doi.org/10.1155/2013/710275

Untangling RFID Privacy Models

Academic Editor: Agusti Solanas
Received: 25 May 2012
Accepted: 24 Jul 2012
Published: 11 Feb 2013

Abstract

The rise of wireless applications based on RFID has brought up major concerns on privacy. Indeed nowadays, when such an application is deployed, informed customers yearn for guarantees that their privacy will not be threatened. One formal way to perform this task is to assess the privacy level of the RFID application with a model. However, if the chosen model does not reflect the assumptions and requirements of the analyzed application, it may misevaluate its privacy level. Therefore, selecting the most appropriate model among all the existing ones is not an easy task. This paper investigates the eight most well-known RFID privacy models and thoroughly examines their advantages and drawbacks in three steps. Firstly, five RFID authentication protocols are analyzed with these models. This discloses a main worry: although these protocols intuitively ensure different privacy levels, no model is able to accurately distinguish them. Secondly, these models are grouped according to their features (e.g., tag corruption ability). This classification reveals the most appropriate candidate model(s) to be used for a privacy analysis when one of these features is especially required. Furthermore, it points out that none of the models are comprehensive. Hence, some combinations of features may not match any model. Finally, the privacy properties of the eight models are compared in order to provide an overall view of their relations. This part highlights that no model globally outclasses the other ones. Considering the required properties of an application, the thorough study provided in this paper aims to assist system designers to choose the best suited model.

1. Introduction

Radio Frequency IDentification (RFID) is a technology that permits identifying and authenticating remote objects or persons without line of sight. In a simple manner, a tag (i.e., a transponder composed of a microcircuit and an antenna) is embedded into an object and interacts with a reader when it enters within its electromagnetic field. The first use of RFID goes back to the early 1940s, during World War II, when the Royal Air Force deployed the IFF (Identify Friend or Foe) system to identify Allied airplanes. Today, RFID is more and more exploited in many domains such as library management, pet identification, antitheft cars, anticounterfeiting, ticketing in public transportation, access control, or even biometric passports. It thus covers a wide range of wireless technologies, from systems based on low-cost tags (such as EPCs [1]) to more evolved ones operating with contactless smartcards [2, 3].

As predictable, some problems come up with this large-scale deployment. One general assumption of RFID systems is that the messages exchanged between the tags and the readers can easily be eavesdropped by an adversary. This raises the problem of information disclosure when the data emitted by a tag reveal details about its holder (called “information leakage”), but also when the eavesdropping of communications allows tracking a tag at different places or times (called “malicious traceability”) and consequently its holder. Many articles pointed out the dangers of RFID with respect to privacy, and the authorities are now aware of this problem. For instance, Ontario Information and Privacy Commissioner Cavoukian advocates the concept of “privacy-by-design” [4], which states that privacy should be put in place in every IT system before its widespread use. In 2009, the European Commissioner for Justice, Fundamental Rights and Citizenship issued a recommendation [5] which strongly supports the implementation of privacy in RFID-based applications.

Much research has emerged in recent years to fight against information leakage and malicious traceability in RFID. However, the search for a generic, efficient, and secure solution that can be implemented in reasonably costly tags remains open [6–8]. Solutions are usually designed empirically and analyzed with ad hoc methods that do not detect all their weaknesses. In parallel, many investigations have been conducted to formalize the privacy notion in RFID. In 2005, Avoine was the earliest researcher to present a privacy model [9]. Since then, many attempts [10–22] have been carried out to propose a convenient and appropriate privacy model for RFID. But each one suffers from distinct shortcomings. In particular, most of these models generally do not take into account all the alternatives that a given power may offer to an adversary. For instance, when an adversary is allowed to corrupt a tag, several possibilities may arise: a corrupted tag could be either destroyed or not, and, in the latter case, this tag could still be requested to interact within the system. At Asiacrypt 2007, Vaudenay introduced the most evolved RFID privacy model [22] known so far. However, this model is not as convenient as some protocol designers may expect, and they sometimes prefer to use a less comprehensive model to analyze a system. Consequently, providing an analysis and a comparison of the major RFID privacy models is meaningful to help designers in their choice. Such a work aims to highlight the strengths and weaknesses of each model. Su et al. already carried out a similar study in [23]. Unfortunately, they only focused on privacy notions and did not consider all the subtleties that are brought by different models. As a consequence, their study considers some models as weak, even though they offer interesting properties.

Our contribution is threefold. Firstly, in Sections 3 to 10, we chronologically present eight well-known models designed to analyze identification/authentication protocols preserving privacy. Some of them are very popular, like [9, 16, 22]. Others have interesting frameworks, like [12, 13, 18] (e.g., [18] is derived from the well-known universal composability framework). Other alternative models are attractive successors of [22], such as [11, 15]. Secondly, in Section 11, we analyze five different authentication protocols with each of these models in order to exhibit the lack of granularity of the state of the art. Finally, in Sections 12 and 13, we thoroughly compare the eight models regarding their different features and their privacy notions. We show that none of these models can fairly analyze and compare protocols. This fact is especially undeniable when the system's assumptions (which can differ from one system to another) are taken into account for an analysis.

2. Common Definitions

In this section, we give all the common definitions that are used in the presented privacy models.

2.1. The RFID System

For all the privacy models, an RFID system is composed of three kinds of entities: tags, readers, and a centralized database. It is generally considered that the database and the readers are permanently connected through a secure channel, and therefore they form one unique entity, the reader.

We denote by T a tag, by R the reader, and by DB the reader's database. A tag T is able to communicate with R when it enters into R's electromagnetic field. Then both reader and tag can together run an RFID protocol execution. This protocol can be an identification or an authentication protocol. We define an n-pass RFID protocol as being a protocol where n messages are exchanged between R and T.

The reader R is a powerful transceiver device whose computation capabilities approach those of a small computer. A tag T is a transponder with identifier ID. Its memory can vary from a hundred bits (as for EPC tags [1]) to a few Kbytes (such as contactless smartcards [2, 3]). Its computation capabilities are generally much lower than a reader's, but, depending on the tag, it can perform simple logic operations, symmetric-key cryptography, or even public-key cryptography. A tag is considered legitimate when it is registered in the database as being an authorized entity of the system. The database DB stores, at least, the identifier and potentially a secret of each legitimate tag involved in the system.

2.2. Basic Definitions

First, we define k as the security parameter of the system and p as a polynomial function. We then say that a function ε is negligible in k if, for every positive polynomial p, there exists an integer k₀ such that, for all k > k₀, ε(k) < 1/p(k).
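The negligibility condition above can be displayed formally (a reconstruction consistent with the prose definition, using k for the security parameter):

```latex
\varepsilon \text{ is negligible in } k
\iff
\forall \text{ positive polynomial } p,\;
\exists k_0 \in \mathbb{N} \text{ such that }
\forall k > k_0,\;
\varepsilon(k) < \frac{1}{p(k)} .
```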

Then, we define all the different entities that may play a role in the presented privacy models. An adversary A is a malicious entity whose aim is to perform some attacks, either through the wireless communications between readers and tags (e.g., eavesdropping), or on the RFID devices themselves (e.g., corruption of a device and obtaining all the information stored on it). The adversary advantage is the success measure of an attack performed by A. In some models, A is requested to answer a kind of riddle, which is determined by an honest entity, called the challenger C. A challenge tag is a tag which is suffering an attack performed by A. It can be chosen either by A or by C.

Generally, a model with oracles is used to represent the possible interactions between A and the system. Thus, A carries out its attack on the system by performing queries to the oracles that simulate the system. The generic oracles used in the presented privacy models are detailed in Section 2.4.

We consider that A is able to play/interact with a tag when the latter is in A's neighborhood. At that moment, the tag is designated by its pseudonym vtag (not by its identifier ID). During an attack, if a tag goes out of and comes back to A's neighborhood, then it is considered that its pseudonym has changed. This notion is detailed in the Vaudenay model [22] (see Section 5). The same case happens when a set of tags is given to the challenger C: when C gives the tags back to A, their pseudonyms are changed.

2.3. Procedures

Most of the models studied in this paper focus on an RFID system based on an anonymous identification protocol implying a single reader and several tags. The system is generally composed of several procedures, either defining how to set up the system, the reader, and the tags, or defining the studied protocol. One way to define these procedures is detailed in the following. Note that this is just a generalization and it may differ in some models. (i) SetupReader(1^k) defines R's parameters (e.g., generating a private/public key pair) depending on the security parameter k. It also creates an empty database DB which will later contain, at least, the identifiers and secrets of all tags. (ii) SetupTag(ID) returns K_ID, that is, the secret of the tag with identifier ID. The pair (ID, K_ID) is stored in the database of the reader. (iii) Ident is a polynomial-time interactive protocol between the reader R and a tag T, where R ends with a private tape Output. At the end of the protocol, the reader either accepts the tag (if legitimate) and Output = ID, or rejects it (if not) and Output = ⊥.

2.4. The Generic Oracles

An adversary A is able to interact/play with the system through the following oracles. First, it can set up a new tag with identifier ID. (i) CreateTag(ID) creates a tag with a unique identifier ID. It uses SetupTag(ID) to set up the tag. It updates DB, adding this new tag.

A can ask for a full execution of the protocol on a tag T. (i) Execute(T) executes an Ident protocol between R and T. It outputs the transcript of the protocol execution, that is, the whole list of the successive messages of the execution.

Also, it can decompose a protocol execution, combining the following oracles. (i) Launch makes R start a new protocol execution π. (ii) SendReader(m, π) sends a message m to R in the protocol execution π. It outputs the response of the reader. (iii) SendTag(m, T) sends a message m to T. It outputs the response of the tag.

Then, A can obtain the reader's result of a protocol execution π. (i) Result(π): when π is completed, it outputs 1 if the reader accepted the tag, and 0 otherwise.

And finally, it can corrupt a tag in order to recover its secret. (i) Corrupt(T) returns the current secret of T.

If the conditions of the oracles' use are not respected, then the oracles return ⊥. Note that these definitions are generic ones. Some models do not use exactly the same generic oracles: in those cases, some refinements of their definitions will be provided.
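The generic oracle interface above can be sketched as a toy simulation. The oracle names come from the text; the underlying 2-pass HMAC challenge/response protocol and all internal details are illustrative assumptions, not one of the protocols studied in the paper.

```python
import os, hmac, hashlib

class ToyRFIDSystem:
    """Toy simulation of the generic oracles of Section 2.4.
    The protocol (HMAC challenge/response) is an illustrative assumption."""

    def __init__(self):
        self.db = {}        # reader database DB: identifier -> secret
        self.sessions = {}  # execution id -> state

    def create_tag(self, identifier):            # CreateTag(ID)
        self.db[identifier] = os.urandom(16)

    def launch(self):                            # Launch: reader starts execution
        pi = len(self.sessions)
        challenge = os.urandom(8)
        self.sessions[pi] = {"challenge": challenge, "output": None}
        return pi, challenge

    def send_tag(self, challenge, identifier):   # SendTag(m, T)
        secret = self.db[identifier]
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def send_reader(self, pi, response):         # SendReader(m, pi)
        chal = self.sessions[pi]["challenge"]
        for ident, secret in self.db.items():
            expected = hmac.new(secret, chal, hashlib.sha256).digest()
            if hmac.compare_digest(expected, response):
                self.sessions[pi]["output"] = ident   # accept: Output = ID
                return
        self.sessions[pi]["output"] = None            # reject: Output = bottom

    def result(self, pi):                        # Result: 1 if accepted, 0 otherwise
        return 1 if self.sessions[pi]["output"] is not None else 0

    def corrupt(self, identifier):               # Corrupt: reveal current secret
        return self.db[identifier]
```

An adversary is then modeled as any program restricted to calling these methods.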

3. Avoine [9], 2005

In 2005, Avoine proposed the first privacy model for RFID systems. The goal was to analyze the untraceability notion of 3-pass protocols following the idea of communication intervals: the adversary asks some oracle queries on specific intervals of the targeted tags' lives. The privacy notion behind this model represents the infeasibility of distinguishing one tag among two.

3.1. The Oracles

This model considers that each tag has a unique and independent secret and that, at the initialization of the system, DB already stores all the tags' secrets, that is, a SetupTag has already been performed on every tag.

Then A has only access to the following modified generic oracles, adapted for 3-pass protocols. Instead of using the entities' names, Avoine uses the protocol execution names. Since R and T can run several protocol executions, T^i (resp., R^j) denotes the i-th (resp., j-th) execution of T (resp., R). These notations favor the precise description of T's and R's lifetimes. (i) SendTag(T^i): A sends a request to T, and then sends the final message after receiving T's answer. This is done during the i-th execution of T. (ii) SendReader(m, R^j) sends the message m to R in the protocol execution R^j. It outputs R's answer. (iii) Execute(T^i, R^j) executes a whole execution of the protocol between R and T. This is done during the i-th execution of T and the j-th execution of R. A obtains the whole transcript. (iv) Execute*(T^i, R^j): this is the same as the normal Execute, but it only returns the reader-side messages, that is, the messages sent by R. (v) Corrupt(T^i): returns the current secret of T when the tag is in its i-th execution.

The goal of the Execute* oracle is to simulate the fact that the forward channel (from reader to tag) has a longer communication range than the backward channel (from tag to reader) and therefore can be easily eavesdropped. It formalizes the asymmetry regarding the channels.
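The channel asymmetry captured by Execute* can be sketched in a few lines. The transcript representation (a list of sender-tagged messages) is an illustrative assumption.

```python
def execute(transcript):
    """Full Execute: the adversary sees every message of the execution.
    Each entry of `transcript` is (sender, message), with sender either
    'reader' or 'tag'."""
    return [msg for _, msg in transcript]

def execute_star(transcript):
    """Execute*: only the forward channel (reader -> tag) is overheard,
    modelling its longer communication range."""
    return [msg for sender, msg in transcript if sender == "reader"]
```

On a 3-pass execution, Execute* thus returns only the first and third messages.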

Two remarks are of interest for the Corrupt oracle. First, Corrupt can be used only once by A. After this oracle query, A cannot use the other oracles anymore. Second, Corrupt is called on the tag execution number, and not on the tag itself. This allows A to specify exactly the targeted moment of the tag's life.

During its attack, A has access to the oracle set O = {SendTag, SendReader, Execute, Execute*, Corrupt}.

Avoine denotes the result of an oracle query on T as the output of SendTag, Execute, Execute*, or Corrupt. Avoine defines an interaction as being a set of executions on the same tag during an interval when A can play with T. Formally, an interaction gathers such oracle results together with the related SendReader answers obtained during the interval, and the length of the interaction is the length of this interval.

Avoine also defines a function Oracle which takes as parameters a tag T, an interval, and the oracles O, and which outputs the interaction that maximizes A's advantage.

3.2. Untraceability Experiments

Avoine defines two experiments to represent two untraceability notions. They depend on two lengths, a reference length and a challenge length, which are functions of the security parameter k.

The first experiment given in Box 1 works as follows. First, A receives the interactions of a tag T during an interval that it chooses. Then, it receives the interactions of the challenge tags T0 and T1, also during the intervals I0 and I1 that it chooses, such that T = T0 or T = T1. This last information is unknown to A. Additionally, neither of the two intervals I0 and I1 crosses the interval of T. At the end, A has to decide which one of the challenge tags is the tag T.

The second experiment given in Box 2 has the same mechanism. The only difference is that, now, C is the one that chooses the intervals I0 and I1 of the challenge tags, and not A anymore.
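The structure of the first experiment can be sketched as a game loop. Interval handling is simplified to opaque labels, and the `adversary`/`interact` interfaces are assumptions made for illustration only.

```python
import secrets

def untraceability_experiment(adversary, interact):
    """Toy version of Avoine's first experiment (Box 1).
    `interact(tag_index, interval)` returns the interactions of a tag
    over an interval; the adversary chooses the intervals and must
    decide which challenge tag is the reference tag."""
    b = secrets.randbelow(2)                  # the reference tag T is T_b
    i_ref = adversary.choose_reference_interval()
    ref_view = interact(b, i_ref)             # interactions of T = T_b
    i0, i1 = adversary.choose_challenge_intervals()
    view0, view1 = interact(0, i0), interact(1, i1)
    guess = adversary.guess(ref_view, view0, view1)
    return guess == b                         # adversary wins iff guess = b
```

Against a fully traceable toy system (one whose interactions directly reveal the tag), a simple adversary wins this game every time; untraceability demands that no adversary does noticeably better than a coin flip.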

3.3. Untraceability Notions

From the experiments defined above, the notions of existential untraceability and universal untraceability are derived in this model, depending on restrictions about the choices of the challenge intervals I0 and I1: the existential notion is when A chooses I0 and I1, whereas the universal notion is when C chooses them. Each notion is further refined according to whether I0 and I1 take place after or before the reference interval, with respect to the lifetime of the system: (i) if A (resp., C) chooses I0 and I1 such that they take place after the reference interval, the notion is the future variant; (ii) if A (resp., C) chooses I0 and I1 such that they take place before the reference interval, the notion is the past variant.

The untraceability notion obtained when the Corrupt oracle is used is called forward untraceability.

Definition 1 (untraceability [9]). An RFID system is said to be untraceable, for any of the variants above, if, for every adversary A, the probability that A wins the corresponding experiment satisfies |Pr[A wins] − 1/2| ≤ ε(k), where ε is a negligible function.

Direct implications can be drawn between these notions.

4. Juels and Weis [16], 2007

Two years after Avoine's publication, Juels and Weis proposed a new privacy model, referred to in the sequel as JW, based on indistinguishability of tags. It intended to analyze classical challenge/response protocols based on symmetric-key cryptography (with possible additional messages in order to update the tags' keys).

In their article, the authors highlighted that the Avoine model lacks two important features. Firstly, they proved that it is unable to catch an important attack on systems where tags have correlated secrets, because Avoine's adversary can only play with two tags. Secondly, they showed that Avoine did not anticipate all the possible attacks that can be performed on a protocol. The Avoine model does not capture all the relevant information that can be extracted from a protocol execution. For instance, it does not consider that A has access to any execution result. However, this simple “side information bit” allows formalizing a special kind of attack on desynchronizable protocols like OSK, as explained in Appendix B.3 and in [24]. Therefore, the JW model aimed to fill that gap.

4.1. Oracles

At the initialization of the system, DB already stores all the tags' content, that is, a SetupTag has already been performed on every tag. Then A has access to the generic oracles Launch, SendTag, and SendReader, with the difference that the output of SendReader includes the output of Result. It furthermore has access to the following oracles. (i) TagInit(T): when T receives this query, it begins a new protocol execution and deletes the information related to any existing execution. (ii) SetKey(T): when T receives this query, it outputs its current key and replaces it by a new one chosen by A.

The SetKey oracle is equivalent to the Corrupt oracle given in Section 2.4 in the sense that it reveals to A the tag's current key. Note that its use and its result have an interesting feature: A is able to put any new key in the targeted tag, either the revealed one or a random one (that can be illegitimate).
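The key-replacement feature can be sketched in a few lines; the class name and internals are illustrative assumptions.

```python
import os

class ToyTag:
    """Minimal tag exposing the JW SetKey oracle: it reveals the current
    key and installs an adversary-chosen replacement, which may be an
    illegitimate key unknown to the reader."""
    def __init__(self):
        self.key = os.urandom(16)

    def set_key(self, new_key):
        old, self.key = self.key, new_key
        return old
```

After a SetKey query with a random key, the tag may no longer authenticate successfully, which is exactly the situation discussed below.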

4.2. Privacy Experiment

Let r, s, and t be, respectively, the numbers of Launch queries, computation steps (represented by the SendReader and SendTag queries), and TagInit queries that are allowed to A. Let n be the total number of tags involved in the system. The privacy experiment is given in Box 3.

4.3. Privacy Notions

From the previous experiment, the JW model defines the following privacy property, where r, s, and t can be functions of the system security parameter k.

Definition 2 ((r, s, t)-privacy [16]). A protocol initiated by R in an RFID system with security parameter k is (r, s, t)-private if, for every adversary A, the advantage of A in the privacy experiment is negligible in k.

Considering a variant of the experiment where the restriction protecting the challenge tags is removed from step (6.b), forward (r, s, t)-privacy can be defined in the same way as the previous definition.

Note that, if A uses SetKey to put an illegitimate key in a tag, then this tag will possibly no longer be successfully authenticated by the reader. Nevertheless, whether this is performed on the nonchallenge tags or on the challenge tags (only for the forward-privacy experiment), it does not help A to find the bit b more easily and thus does not influence its success in the experiment.

5. Vaudenay [22], 2007

Later the same year, Vaudenay proposed formal definitions for RFID systems and adversaries and considered that a system can be characterized by two notions: security and privacy. In this paper, we only present the privacy notion. Vaudenay's article followed some joint work done with Bocchetti [25], and its goal was to propose a comprehensive model that can formalize a wide range of adversaries. This characteristic is missing in the previous models and turns out to be an asset of the Vaudenay model.

This model defines tags with respect to the adversary's possibility to interact with them, as explained in Section 2.2. Clearly, when a tag is within A's neighborhood, it is said to be drawn and has a pseudonym vtag so that A is able to communicate with the tag. In the opposite situation, a tag is said to be free (i.e., not drawn), and A cannot communicate with it. Consequently, the model considers that, at any given time, a tag can be either drawn or free. For example, the same tag with identifier ID which is drawn, freed, and drawn again has two pseudonyms: A sees two different tags. Additionally, all the tags may not be accessible to A during all the attack: for instance, A may only play with two drawn tags during its attack.

5.1. Oracles

Contrary to the previous models, DB is empty at the initialization of the system. Then A has access to all the generic oracles defined in Section 2.4. The only modification done on these ones is that A can create a fake tag with CreateTag. In that case, no information related to this tag is stored in DB. It can also query the following ones. (i) DrawTag(distr, n): following the distribution probability distr (which is specified by a polynomially bounded sampling algorithm), it randomly selects n tags among all the existing free (not already drawn) ones. For each chosen tag, the oracle assigns to it a new pseudonym, denoted vtag, and changes its status from free to drawn. Finally, the oracle outputs all the generated pseudonyms in a random order. If there are not enough free tags (i.e., fewer than n), or if the chosen tags are already drawn, then the oracle outputs ⊥. It is further assumed that this oracle returns bits telling whether each of the n tags is legitimate or not. All the pseudonym-identifier relations are kept in an a priori secret table. (ii) Free(vtag) moves the tag from the drawn status to the free status. vtag is unavailable from now on.
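The DrawTag/Free bookkeeping can be sketched as follows. The uniform drawing distribution and the class name are assumptions; the point is the secret pseudonym table linking each vtag to an identifier.

```python
import os, secrets

class DrawFreeManager:
    """Toy DrawTag/Free bookkeeping from the Vaudenay model: drawn tags
    get fresh pseudonyms, and the pseudonym -> identifier table stays
    secret (the adversary only ever sees the vtag values)."""
    def __init__(self, identifiers):
        self.free = set(identifiers)
        self.table = {}                        # secret: pseudonym -> identifier

    def draw_tag(self, n):                     # uniform distribution assumed
        if len(self.free) < n:
            return None                        # not enough free tags
        chosen = [self.free.pop() for _ in range(n)]
        vtags = []
        for ident in chosen:
            vtag = "vtag_" + os.urandom(4).hex()
            self.table[vtag] = ident
            vtags.append(vtag)
        secrets.SystemRandom().shuffle(vtags)  # output in random order
        return vtags

    def free_tag(self, vtag):                  # Free: pseudonym becomes unavailable
        self.free.add(self.table.pop(vtag))
```

Re-drawing a freed tag yields a fresh pseudonym, so the adversary sees what looks like a new tag, as described above.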

5.2. Privacy Experiment

From the oracles given above, Vaudenay defines five classes of polynomial-time adversaries, characterized by A's ability to use the oracles.

Definition 3 (adversary class [22]). An adversary class is said to be (i) STRONG if A has access to all the oracles; (ii) DESTRUCTIVE if A cannot use a “corrupted” tag anymore (i.e., the tag has been destroyed); (iii) FORWARD if A can only use the Corrupt oracle after its first query to the Corrupt oracle; (iv) WEAK if A has no access to the Corrupt oracle; (v) NARROW if A has no access to the Result oracle.

Remark 4. The following relation is clear: WEAK ⊆ FORWARD ⊆ DESTRUCTIVE ⊆ STRONG.

Note that the NARROW notion is the contrary of the WIDE one. If an adversary is not said to be NARROW, then nothing is said, but the term WIDE is implicitly meant.

Vaudenay's privacy experiment is given in Box 4, where P denotes the adversary class considered.

5.3. Privacy Notions

To define Vaudenay's privacy property, one first needs the notions of blinder (i.e., an algorithm able to simulate the answers of some specific oracles) and trivial adversary (i.e., an adversary that learns nothing about the system).

Definition 5 (blinder, trivial adversary [22]). A blinder B for an adversary A is a polynomial-time algorithm which sees the same messages as A and simulates the Launch, SendReader, SendTag, and Result oracles to A. B does not have access to the reader tapes, so it knows neither the secret key nor the database.
A blinded adversary is itself an adversary that does not use the Launch, SendReader, SendTag, and Result oracles.
An adversary A is trivial if there exists a blinder B such that the difference between A's success probabilities with the real oracles and with the blinded ones is negligible.
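The blinder idea can be sketched minimally: an algorithm that answers the adversary's oracle queries without any access to the reader's secrets. The simulation strategy here (random bitstrings and a fair coin for Result) is an illustrative assumption.

```python
import os, secrets

class Blinder:
    """Toy blinder B (Definition 5): it answers oracle queries with no
    access to the reader's secret key or database. If an adversary
    succeeds noticeably better against the real oracles than against
    this simulation, the oracle answers leak private information."""
    def launch(self):
        return os.urandom(8)              # simulated reader challenge
    def send_tag(self, message):
        return os.urandom(32)             # simulated tag response
    def result(self, pi):
        return secrets.randbelow(2)       # simulated accept/reject bit
```

A system is then private for a class of adversaries when every adversary in the class is trivial, that is, some blinder makes its view essentially useless.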

Definition 6 (privacy [22]). The RFID system is said to be P-private if all the adversaries which belong to class P are trivial following Definition 5.

The implications between Vaudenay's privacy notions follow directly from the inclusions between the adversary classes.

The main result of Vaudenay is that STRONG privacy is impossible. However, Vaudenay does not define which privacy level should be targeted by a protocol: for instance, it is never specified whether NARROW-STRONG privacy is better or not than FORWARD privacy.

Also, it is not explicit how the blinded adversary operates. Basically, there are two options: (i) the blinder targets the same success probability as A, or (ii) the blinder mimics the behavior of A. It is obvious that the first option allows proving the privacy of some protocols which are actually not private, but this should be correctly formalized.

5.4. Extensions of the Model
5.4.1. The Paise-Vaudenay Model [21], 2008

Paise and Vaudenay extended the Vaudenay model to analyze mutual authentication protocols. Actually, they enriched the definition of the RFID system by introducing an output on the tag side: either the tag accepts the reader (if legitimate), or rejects it (if not). This formalizes the concept of reader authentication. Nevertheless, their extension does not modify the core of the Vaudenay model.

They also showed an important impossibility result: if the corruption of a tag reveals its entire state (and not only its secret), then no RFID scheme providing reader authentication is NARROW-FORWARD private. To counter this issue, they claimed that the temporary memory of a tag should be automatically erased as soon as the tag is put back as free. However, this idea is not formalized in the paper.

This division between the persistent and the temporary memory of a tag has also been investigated by Armknecht et al. [26]. Based on the work of Paise and Vaudenay, they showed several impossibility results in attack scenarios with special uses of tag corruption.

5.4.2. The Ouafi Model [20], 2011

Ouafi presented in his thesis an adaptation of the Vaudenay model in order to counter Vaudenay's impossibility result of STRONG privacy. Concretely, the author proposed to incorporate the blinder with the adversary, so that the blinder has the knowledge of all the random choices made, and all the messages received, by the adversary. With this new definition of the blinder, Ouafi proved that STRONG privacy can be ensured. This result is demonstrated with a public-key-based authentication protocol where the encryption scheme is IND-CCA2 secure and PA1+ plaintext-aware. (More details about these security notions can be found in [27].)

5.4.3. Other Extensions

The Vaudenay model has also been broadened in different works. In a nutshell, this is generally performed via the addition of a new oracle to the adversary capabilities (e.g., Timer in [28], MakeInactive in [29], or DestroyReader in [30]) and of the corresponding new adversary class (e.g., a class where A is allowed to use Timer).

6. Van Le et al. [10, 18], 2007

Also in 2007, van Le et al. introduced a privacy model in [18] (and an extended version in [10]) that is derived from the universal composability (UC) framework [31, 32] (and not from the oracle-based framework). Their aim was to provide security proofs of protocols under concurrent and modular composition, such that protocols can be easily incorporated in more complex systems without reanalyses. Basically, the model, denoted LBM in the following, is based on the indistinguishability between two worlds: the real world and the ideal one.

The transposition of RFID privacy into such a framework is a great contribution since universal composability is considered as one of the most powerful tools for security, especially when composition among several functionalities is required.

6.1. UC Security

General statements about the UC framework are briefly detailed in Appendix A for the reader unfamiliar with the field. Here, we present the security notion provided in such a framework.

To prove that a protocol π is as secure as the corresponding ideal functionality F, no environment E should be able to distinguish whether it is interacting with the real adversary A and π (i.e., the real world), or with the simulated adversary S and F (i.e., the ideal world). Consequently, F must be well defined such that all the targeted security properties are trivially ensured. Canetti formally defines this concept in [31] as follows, where PPT denotes a probabilistic polynomial-time Turing machine.

Definition 7 (UC-emulation [31]). A protocol π₁ UC-emulates a protocol π₂ if, for every PPT adversary A, there exists a PPT simulated adversary S such that, for every PPT environment E, the distributions Exec(π₁, A, E) and Exec(π₂, S, E) are indistinguishable.

Based on this security framework, van Le et al. designed in [10, 18] several ideal functionalities to formalize anonymous authentication as well as anonymous authenticated key exchange.

6.2. Description of the LBM Model

The advantage of using this UC-based model is that all the possible adversaries and environments are considered during the security proof that can be carried out with LBM. In this paper, we only focus on the forward-security objective provided by anonymous authentication.

6.2.1. Assumptions of an RFID System

First, the LBM model establishes that the reader is the only entity that can start a protocol execution. Then, it considers that only tags can be corrupted by an adversary A. Upon corruption of a tag, A obtains its keys and all its persistent memory values.

6.2.2. The LBM Ideal Functionality

This ideal functionality represents the anonymous authentication security objective of a given protocol. To do so, several parties (at least R and one tag) may be involved in a protocol execution. Two parties are said to be feasible partners if and only if they are, respectively, R and a tag. In the ideal world, communication channels between tags and R are assumed to be anonymous (meaning that they only reveal the type of a party, either tag or reader), and a sent message is necessarily delivered to the recipient. Finally, state is the list of all the execution records, and active is the list of all the preceding incomplete executions (Box 5).

6.2.3. Forward-Security

When the adversary corrupts a tag T, it gets its identifier ID and is then able to impersonate this tag using the Impersonate command. A corrupted tag is thereafter considered as totally controlled by the adversary. Consequently, the ideal functionality will no longer manage the behavior of this corrupted tag and thus will reject every Initiate command from this tag. As the tag's records in state are removed after a corruption, the adversary is not able to link the related tag to its previous authentications.

However, the adversary is able to link all the incomplete protocol executions of a corrupted tag up to the last successfully completed one, based on the knowledge of active. Thus, the ideal functionality obviously provides forward-security for all previously completed protocol executions.

7. Van Deursen et al. [13], 2008

The model of van Deursen et al., published in 2008, defines untraceability in the standard Dolev-Yao intruder model [33]. The untraceability notion is inspired by the anonymity theory given in [34, 35] and is used as a formal verification of RFID protocols. Such a technique is based on a symbolic protocol analysis approach (and not on the oracle-based framework). This model will be called DMR in what follows.

7.1. Definition of the System

We recall below the basic definitions given in DMR.

First, the system is composed of a number of agents (e.g., Alice or Bob) that execute a security protocol, the latter being described by a set of traces. A security protocol represents the behavior of a set of roles (i.e., initiator, responder, and server), each one specifying a set of actions. These actions depict the role specifications with a sequence of events (e.g., sending or reception of a message). A role term is a message contained in an event, and it is built from basic role terms (e.g., nonces, role names, or keys). A complex term is built with functions (e.g., tupling, encryption, hashing, and XOR).

Each trace is composed of interleaved runs and run prefixes, denoted subtraces. A run of a role is a protocol execution from that role's point of view, identified by a (possibly unique) run identifier. Thus, a run is an instantiation of a role. A run event is an instantiation of a role event, that is, an instantiation of an event's role terms. A run term denotes an instantiated role term. A run prefix is an unfinished run.

The adversary is a Dolev-Yao adversary and is characterized by its knowledge. This knowledge is composed of a set of run terms known at the beginning of the attack and the set of run terms observed during the attack. The adversary is allowed to manipulate the information in its knowledge to understand terms or build new ones. However, perfect cryptography is assumed (i.e., cryptographic primitives are assumed unbreakable and considered as black boxes). The model defines an inference relation expressing that a term can be derived from a term set.
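To make the inference relation concrete, the following sketch (my own toy encoding, not DMR's formalism; `infer` and `derivable` are hypothetical helpers) computes a Dolev-Yao knowledge closure under perfect cryptography: pairs can always be split, an encryption can be opened only when its key is already derivable, and primitives themselves are never broken.

```python
# Toy Dolev-Yao inference closure (hypothetical encoding): terms are
# atoms, ("pair", a, b), or ("enc", key, payload).

def infer(knowledge):
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            derived = []
            if isinstance(term, tuple) and term[0] == "pair":
                derived = [term[1], term[2]]      # split pairs freely
            elif isinstance(term, tuple) and term[0] == "enc":
                if term[1] in known:              # key known -> decrypt
                    derived = [term[2]]
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

def derivable(knowledge, term):
    """Does the inference relation hold between the set and the term?"""
    return term in infer(knowledge)
```

A nonce inside a pair is derivable, while an encrypted payload stays opaque until the key enters the adversary's knowledge.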

Corrupted agents are modeled. (Note that, regarding corruption, there is no restriction on the role of such an agent: it can be either a tag or a reader.) The adversary is given all the secrets of a corrupted agent in its initial knowledge. When an agent is corrupted, it is said to be “destroyed,” that is, it cannot be used during the adversary's attack. Yet, the security evaluation of a system is done on noncorrupted agents, that is, the adversary cannot obtain the secret of an agent after the beginning of its attack.

7.2. Untraceability Notion

First, the model defines several notions of linkability, reinterpretation, and indistinguishability, before giving the untraceability one.

Definition 8 (linkability of subtraces [13]). Two subtraces are linked if they are instantiated by the same agent.

The notion of reinterpretation has been introduced in [34] in order to show that subterms of a message can be replaced by other subterms if the adversary is not able to understand these subterms. Note that, when the adversary is able to understand a subterm, it remains unchanged.

Definition 9 (reinterpretation [13]). A map π from run terms to run terms is called a reinterpretation under a knowledge set M if it and its inverse satisfy the following conditions: (i) π(t) = t if t is a basic run term, (ii) π((t1, …, tn)) = (π(t1), …, π(tn)) if the term is an n-tuple, (iii) π({t}k) = {π(t)}k if M ⊢ k or M ⊢ k⁻¹, where {t}k is an encryption under key k, (iv) π(f(t)) = f(π(t)) if M ⊢ t or f is not a hash function.

Reinterpretations are used to define indistinguishability of traces.

Definition 10 (indistinguishability of traces [13]). Let M be the adversary's knowledge at the end of a trace t. The trace t is indistinguishable from a trace t′, denoted t ∼ t′, if there is a reinterpretation π under M that maps the subtrace of each role in t to the corresponding subtrace in t′, for all roles and subtraces.

From all the above notions, the untraceability notion of a role is defined as follows.

Definition 11 (untraceability [13]). A protocol is said to be untraceable with respect to a given role if every trace containing two linked subtraces of that role is indistinguishable from some trace in which these subtraces are not linked. In this paper, if no role is specified, we consider that “untraceability” means “untraceability for the tag role.”
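The intuition behind Definitions 9–11 can be illustrated with a deliberately simplified sketch (my own toy semantics, with hypothetical helpers `understands` and `can_unlink`, not DMR's calculus): a subterm the adversary cannot understand may be remapped by a reinterpretation, so two observations by the same tag can be "explained" as coming from different tags.

```python
# Toy reinterpretation intuition: messages are atoms or ("hash", x);
# the adversary understands a hash only if it knows the preimage.

def understands(knowledge, term):
    if isinstance(term, tuple) and term[0] == "hash":
        return term[1] in knowledge     # hash opaque unless preimage known
    return term in knowledge            # atoms: known or not

def can_unlink(knowledge, msg_a, msg_b):
    """Can the two observed messages be attributed to distinct tags?
    If both are opaque, a reinterpretation may remap one of them; if
    they are understood, only literal equality links them."""
    if not understands(knowledge, msg_a) and not understands(knowledge, msg_b):
        return True                     # both opaque: remappable
    return msg_a != msg_b               # understood terms link iff equal
```

A tag that answers with hashes of fresh nonces is untraceable in this toy sense, whereas a tag that replays a fixed, understood identifier is not.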

8. Canard et al. [11, 36], 2010

In the same vein as the Vaudenay model, Canard et al. proposed in 2010 a security model that comprises the properties of (strong) correctness, soundness, and untraceability. We only present the last notion. Contrary to Vaudenay, the authors only defined untraceability (and not privacy in general), and their main goal was to use the strongest adversary of the Vaudenay model. In the following, this model will be denoted CCEG.

8.1. Oracles

As for Vaudenay, the set of drawn tags is empty after the setup of the system, and a tag can be either free or drawn. The adversary then has access to all the generic oracles. It may also use the following ones. (i) DrawTag works similarly to the one of Vaudenay. It first selects, uniformly at random, the requested number of tags among all existing (not already drawn) ones. For each chosen tag, the oracle gives it a new pseudonym and changes its status from free to drawn. Finally, since the adversary cannot create fake tags here, the oracle only outputs all the generated pseudonyms, in any order. If there are not enough free tags, the oracle outputs ⊥. All the relations between pseudonyms and identifiers are kept in an a priori secret table. (ii) Free works exactly as the one of Vaudenay.
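The DrawTag/Free bookkeeping can be sketched as follows (a minimal illustration with hypothetical class and method names, not CCEG's formal definition): DrawTag samples free tags uniformly at random, assigns fresh pseudonyms, and records the pseudonym-to-identifier relation in a table hidden from the adversary; Free releases a drawn tag.

```python
# Sketch of the CCEG tag-drawing oracles (hypothetical names).
import random

class TagOracles:
    def __init__(self, identifiers):
        self.free = set(identifiers)    # tags currently free
        self.table = {}                 # pseudonym -> identifier (secret)
        self.counter = 0

    def draw_tag(self, n):
        if len(self.free) < n:
            return None                 # not enough free tags
        chosen = random.sample(sorted(self.free), n)
        pseudonyms = []
        for tag_id in chosen:
            self.free.remove(tag_id)    # free -> drawn
            self.counter += 1
            p = f"vtag{self.counter}"   # fresh pseudonym
            self.table[p] = tag_id
            pseudonyms.append(p)
        return pseudonyms

    def free_tag(self, pseudonym):
        self.free.add(self.table[pseudonym])    # drawn -> free
```

Note that the adversary only ever sees the pseudonyms; the table linking them to identifiers stays internal to the oracle.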

8.2. Untraceability Experiment

From the oracles given above, CCEG defines three classes of polynomial-time adversaries for the untraceability experiment.

Definition 12 (adversary class [11]). An adversary class is said to be (i) strong if the adversary has access to all the oracles; (ii) destructive if it cannot use a “corrupted” tag anymore (i.e., the tag has been destroyed); (iii) weak if it has no access to the Corrupt oracle.

The authors do not define the narrow adversary class introduced in the Vaudenay model (see Section 5 for more details). They consider that the model aims to be as powerful as possible, whereas the narrow notion weakens the adversary.

A link is a couple of pseudonyms associated to the same identifier in the secret table. Some links are considered obvious (e.g., both pseudonyms of the couple have been corrupted). Therefore, the authors define the notion of nonobvious link. As a remark, links are chronologically ordered; that is, in a link, the first pseudonym has been freed before the second one has been drawn.

Definition 13 (nonobvious link (NOL) [11]). A couple of pseudonyms is a nonobvious link if both pseudonyms refer to the same identifier in the secret table and if a “dummy” adversary, that only has access to CreateTag, DrawTag, Free, and Corrupt, is not able to output this link with nonnegligible probability. Moreover, a nonobvious link is said to be (i) standard if the adversary has corrupted neither of the two pseudonyms; (ii) past if it has corrupted the later pseudonym; (iii) future if it has corrupted the earlier pseudonym.

Note that this model uses a “dummy” adversary instead of a blinded adversary as in the Vaudenay model. Both adversaries are equivalent but not identical. Indeed, the main difference is that Vaudenay's blinder is an entity clearly separated from the adversary: the blinder does not know the random choices made by the adversary during the experiment. On the opposite, in CCEG, the dummy adversary and the adversary form a single entity, which is consequently aware of its random choices.

A weak adversary is only able to output a standard NOL, as it cannot query the Corrupt oracle. A destructive adversary is not able to output a future NOL, as a tag corruption destroys the tag (and thus prevents the tag from being drawn again). However, this adversary can output a standard or past NOL. Finally, a strong adversary is able to output every NOL.

CCEG’s untraceability experiment is given in Box 6, where the adversary class is one of weak, destructive, or strong.
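The classification of links in Definition 13 can be sketched as a small predicate (an illustrative helper of my own, with hypothetical names, not part of CCEG): given which pseudonyms of a chronologically ordered link the adversary has corrupted, the link is obvious, standard, past, or future.

```python
# Sketch of the NOL classification (hypothetical helper): the link
# (p, p_next) is ordered, i.e., p was freed before p_next was drawn.

def classify_link(p, p_next, corrupted):
    """corrupted: set of pseudonyms under which Corrupt was queried."""
    if p in corrupted and p_next in corrupted:
        return "obvious"    # both sides corrupted: not a NOL
    if p not in corrupted and p_next not in corrupted:
        return "standard"   # no corruption involved
    if p_next in corrupted:
        return "past"       # corruption under the later pseudonym
    return "future"         # corruption under the earlier pseudonym
```

This matches the capabilities listed above: a weak adversary (no Corrupt queries) can only ever produce standard links, and a destructive adversary can never produce a future link, since corrupting the earlier pseudonym destroys the tag before it could be drawn again.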

8.3. Untraceability Notions

With the previous experiment, the CCEG untraceability of a system is proved if no adversary is able to output a NOL with a probability better than the one of the dummy adversary.

Definition 14 (untraceability [11]). An RFID system is said to be strong-untraceable (resp., destructive-untraceable/weak-untraceable) if, for every strong (resp., destructive/weak) adversary running in polynomial time, it is possible to define a “dummy” adversary that only has access to oracles CreateTag, DrawTag, Free, and Corrupt such that the probability that the adversary outputs a nonobvious link exceeds that of the dummy adversary by at most a negligible amount.

Direct implications follow from these notions: strong-untraceability implies destructive-untraceability, which in turn implies weak-untraceability.

The main result of this paper is that strong-untraceability (the strongest privacy property) is achievable.

9. Deng et al. [12], 2010

Also in 2010, Deng et al. proposed a new framework based on a zero-knowledge formulation to define the security and privacy of RFID systems. Here, we only present the zero-knowledge privacy (denoted zk-privacy), which is a new way of thinking about privacy for RFID. This model, denoted DLYZ in the sequel, is part of the unpredictability models family [12, 14, 17, 19]. These models all rely on the unpredictability of the output returned by a tag or a reader in a protocol execution. In this paper, we decide to only present DLYZ since it is the most accomplished model of this family.

9.1. Considered Protocol

This model considers that an RFID protocol execution is, w.l.o.g., always initiated by the reader and consists of a fixed number of rounds. Each protocol execution is associated to a unique identifier. At each execution, a tag may update its internal state and secret key, and the reader may update its internal state and database. The update process (of the secret key or the internal state) on a tag always erases the old values. The output bits of the reader and of the tag (equal to 1 if the entity accepts the protocol execution, or 0 otherwise) are publicly known. Note that the authors claim that a tag has no output bit if the authentication protocol is not mutual. However, we consider this too limiting, since a tag can have an output (possibly known by the adversary) even if it does not authenticate the reader. For instance, its output can be “I arrived correctly at the end of the protocol on my side.”

DLYZ assumes that a tag may participate in at most a polynomial number of executions in its life with the reader; thus the reader is involved in at most that number of executions per tag, times the total number of tags involved in the system.

9.2. Oracles

In a nutshell, DLYZ aims to analyze protocols where the entities' secrets may potentially be updated at every protocol execution. Therefore, the model automatically enumerates the internal information of each entity. At the initialization of the system, the database is in an initial state and already stores the secrets of all the tags; that is, a tag setup has already been performed on every tag. The only differences in the initialization are the following: (i) the setup additionally generates the reader's initial internal state; (ii) it associates to every tag a triplet consisting of the tag's public parameter, initial secret key, and initial internal state.

This information is stored in the database. Finally, the public parameters of the system are made available to all parties. At the end of the system's initialization, all the tags are accessible to the adversary.
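The initialization can be sketched as follows (a minimal illustration with hypothetical names and data layouts, not DLYZ's formal setup): the reader's initial internal state is created, and each tag receives a triplet of public parameter, initial secret key, and initial internal state, all stored in the database before the adversary starts.

```python
# Sketch of the DLYZ-style system initialization (hypothetical names).
import os

def setup_system(tag_names):
    reader_state = {"sessions": {}}        # reader's initial internal state
    database = {}
    for name in tag_names:
        database[name] = {
            "param": name,                 # tag public parameter
            "key": os.urandom(16),         # initial secret key
            "state": 0,                    # initial internal state (counter)
        }
    return reader_state, database
```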

Then, the adversary has access to the following modified generic oracles. (i) Launch makes the reader launch a new protocol execution and generate the first-round message, which is also used as the execution identifier. The reader stores this identifier into its internal state. (ii) SendTag sends a message to a tag. The output response of the tag is as follows. (1) If the tag currently does not run any execution, then it (a) initiates a new execution, (b) treats the received message as the first-round message of the new execution, (c) and returns the second-round message. (2) If the tag is currently running an incomplete execution and is waiting for the next message from the reader, then it works as follows: (a) if the received message is an intermediate one, the tag treats it as the expected message from the reader and returns the next-round message; (b) if it is the last-round message of the execution, the tag returns its output bit and updates its internal state. (iii) SendReader sends a message to the reader for the execution with the given identifier. After receiving the message, the reader checks from its internal state whether it is running such an execution, and its response is as follows. (1) If the reader is currently running an incomplete execution with this identifier and is waiting for a message from a tag, then it works as follows: (a) if the received message is an intermediate one, the reader treats it as the message from the tag and returns the next-round message; (b) otherwise, the reader returns the last-round message and its output bit, and updates its internal state and the database. (2) In all the other cases, the reader returns ⊥ (for invalid queries). (iv) Corrupt returns the secret key and the internal state currently held by the queried tag. Once a tag is corrupted, all its actions are controlled and performed by the adversary.
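The oracle mechanics above can be illustrated with a toy three-round protocol (my own sketch under strong simplifying assumptions; `Tag`, `Reader`, `mac`, and the key-update rule are all hypothetical, not DLYZ's construction): Launch produces a first-round message that doubles as the execution identifier, SendTag drives the tag's state machine, and both sides update their secrets at the last round, erasing the old values as the model requires.

```python
# Toy 3-round protocol driven through Launch/SendTag/SendReader
# (hypothetical structures, not the model's formal oracles).
import hashlib, os

def mac(key, msg):
    return hashlib.sha256(key + msg).digest()

class Tag:
    def __init__(self, key):
        self.key = key
        self.sid = None                      # no running execution

    def send_tag(self, m):
        if self.sid is None:                 # round 1: challenge received
            self.sid = m
            return mac(self.key, m)          # round-2 message
        out = 1                              # last round: output bit
        self.key = mac(self.key, b"update")  # update erases the old key
        self.sid = None
        return out

class Reader:
    def __init__(self, db):
        self.db = db                         # tag name -> key
        self.sessions = {}

    def launch(self):
        sid = os.urandom(8)                  # 1st-round msg = execution id
        self.sessions[sid] = "waiting"
        return sid

    def send_reader(self, m, sid):
        if self.sessions.get(sid) != "waiting":
            return None                      # invalid query
        del self.sessions[sid]
        for t, k in self.db.items():
            if mac(k, sid) == m:
                self.db[t] = mac(k, b"update")   # keep db in sync
                return (b"done", 1)              # last-round msg, output 1
        return (b"done", 0)
```

Because both sides apply the same key update, a second execution still authenticates, which is the synchronization property the reader's database update is meant to preserve.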

For a completed protocol execution, the transcript is the sequence of all the exchanged round messages, excluding the entities' outputs.

These four oracles form the adversary's interface. A PPT adversary of the first stage takes as input the system public parameters, the reader, and the tag set of the already initialized system, and then interacts with the reader and the tags via the four oracles. A PPT adversary of the second stage is equivalent to the first one, except that its input generally includes some historical state information of the first stage. The second-stage adversary is said to have a blinded access to a challenge tag if it interacts with it via a special interface (i.e., a PPT algorithm which runs the challenge tag internally and interacts with the adversary externally). To send a message to the challenge tag, the adversary sends a SendTag query to the interface; the interface then invokes the challenge tag with this SendTag query and forwards the tag's answer back to the adversary. The adversary does not know which tag is interacting with it, and it interacts with the challenge tag via SendTag queries only.
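The blinded interface can be sketched as a thin wrapper (an illustration with hypothetical class names, not DLYZ's formal interface): it runs the secretly chosen challenge tag internally, and the adversary only ever sees SendTag answers, never the tag's identity.

```python
# Sketch of the blinded access interface (hypothetical names).
import random

class BlindInterface:
    def __init__(self, clean_tags):
        # the challenge tag is drawn at random; its identity stays hidden
        self._challenge = random.choice(clean_tags)

    def send_tag(self, message):
        # only SendTag queries are forwarded to the hidden challenge tag
        return self._challenge.respond(message)

class EchoTag:
    """Stand-in tag whose response does not reveal its identity."""
    def __init__(self, name):
        self.name = name

    def respond(self, message):
        return ("resp", message)    # identical answer for every tag
```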

Definition 15 (clean tag [12]). A tag is said to be clean if it is not corrupted (i.e., no Corrupt query was made on it) and it is not currently running an incomplete execution with the reader (i.e., its last execution is either finished or aborted).

The main goal of this definition is to force the adversary to use some uncorrupted and nonrunning tags in the zk-privacy experiment (see next section). This notion of nonrunning tags is very similar to the TagInit oracle of JW.

9.3. Privacy Experiments

In the experiments, a PPT CMIM (concurrent man-in-the-middle) adversary (resp., PPT simulator) is composed of a pair of algorithms, one per stage, and runs in two stages. Note that, if the first stage returns an empty set of clean tags, then no challenge tag is selected, and the experiment is reduced to its first stage.

The first experiment, given in Box 7, is the one performed by the real adversary. After the system initialization, the first-stage adversary plays with all the entities and returns a set of clean tags. Then a challenge tag is chosen at random from this set. The second-stage adversary then plays with all the entities, except the tags of the clean set, the challenge tag being reachable only through the blinded interface. At the end, it outputs a view of the system.

Then, the second experiment, given in Box 8, is the one performed by the simulator. As in the previous experiment, the first-stage simulator plays with all the entities and returns a set of clean tags. A challenge tag is then chosen at random from this set, but the simulator is not informed of its identity and cannot play with this tag anymore. The second-stage simulator then plays with all the entities, except the tags of the clean set. At the end, it outputs a simulated view of the system.
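The zero-knowledge intuition behind the two experiments can be illustrated with a deliberately trivial sketch (my own toy harness with hypothetical names, not the DLYZ experiments themselves): if the challenge tag's answers look like fresh random values, a simulator that never queries the tag can sample a view with the same distribution, so the adversary's view leaks nothing about the tag.

```python
# Toy illustration of the real-vs-simulated view comparison
# (hypothetical harness, not the formal zk-privacy experiments).
import random

def real_experiment(rng):
    # adversary's view: one blinded answer from the challenge tag,
    # modeled here as a uniformly random byte
    return rng.randrange(256)

def simulated_experiment(rng):
    # the simulator never queries the tag; it samples the answer itself
    return rng.randrange(256)

def views_close(n=10_000, seed=1):
    r1, r2 = random.Random(seed), random.Random(seed + 1)
    real = [real_experiment(r1) for _ in range(n)]
    sim = [simulated_experiment(r2) for _ in range(n)]
    # crude distribution check: compare empirical means
    return abs(sum(real) / n - sum(sim) / n) < 10
```

A real protocol whose answers depended on the tag's identity would make the two empirical distributions diverge, which is exactly what the zk-privacy definition rules out.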

9.4. Privacy Notions

From the previous experiments, the zk-privacy of a system