In current single sign-on authentication schemes on the web, users are required to interact with identity providers securely to set up authentication data during a registration phase and receive a token (credential) for future access to services and applications. This type of interaction can make authentication schemes challenging in terms of security and availability. From a security perspective, a main threat is theft of authentication reference data stored with identity providers. An adversary could easily abuse such data to mount an offline dictionary attack for obtaining the underlying password or biometric. From a privacy perspective, identity providers are able to track user activity and control sensitive user data. In terms of availability, users rely on trusted third-party servers that need to be available during authentication. We propose a novel decentralized privacy-preserving single sign-on scheme, Decentralized Anonymous Multi-Factor Authentication (DAMFA), a new authentication scheme where identity providers no longer require sensitive user data and can no longer track individual user activity. Moreover, our protocol eliminates dependence on an always-on identity provider during user authentication, allowing service providers to authenticate users at any time without interacting with the identity provider. Our approach builds on threshold oblivious pseudorandom functions (TOPRF) to improve resistance against offline attacks and uses a distributed transaction ledger to improve availability. We prove the security of DAMFA in the universal composability (UC) model by defining a UC definition (ideal functionality) for DAMFA and formally proving the security of our scheme via ideal-real simulation. Finally, we demonstrate the practicability of our proposed scheme through a prototype implementation.

1. Introduction

Authenticated Key Exchange (AKE) is one of the most broadly used cryptographic primitives; it enables two parties to establish a shared key over a public network. Typically, the parties need to hold authentication tokens, e.g., cryptographic keys (asymmetric or symmetric high-entropy keys) or short secret values (low-entropy passwords). These authentication tokens are securely stored with a trusted service provider during the registration phase. There are various types of authentication factors, such as knowledge, possession, and physical presence; low-entropy passwords are the most widespread in practice. An example of an authentication protocol that relies on passwords is Password-Based Authenticated Key Exchange (PAKE) [1].

However, passwords are usually vulnerable to both online and offline attacks [2, 3]. An attacker who compromises the data stored with the service provider (user account data, consisting of usernames and associated (potentially salted) password hashes) can run an offline dictionary attack on that data. Such an attack leads to the disclosure of user accounts, and this has happened several times in the past, cf. [2, 4, 5]. Even if low-entropy passwords are correctly salted and hashed, they still do not resist brute-force attacks on modern hardware. Already in 2012, a rig of 25 GPUs could test up to 350 billion guesses per second in an offline dictionary attack [6].

Multi-Factor Authentication (MFA) schemes overcome this risk by adding additional authentication factors. MFA combines (low-entropy) passwords with, e.g., secret values stored in physical tokens. Recent advancements in fingerprint readers and other sensors have led to the increased usage of smartphones and biometric factors in MFA schemes (e.g., the use of biometrics to securely retrieve private information [8]; see Figure 1). These methods make the guessing of authentication factors more difficult. However, some MFA schemes incorporate password authentication and second-factor authentication as separate mechanisms and store a salted password hash (or biometric) on the server, leading to different vulnerabilities such as spoofing and offline attacks [7, 9]. In other words, an adversary compromising the server is still able to recover the actual password (even if that password is no longer usable without the additional associated factors). Moreover, mobile devices (smartphones, wearables, FIDO U2F tokens, etc.) are more likely to be subject to loss or theft, and particularly smartphones and wearables open a large, high-risk attack surface for malware [10, 11].

In general, authentication schemes are designed to uniquely identify a user. Consequently, they do not aim at protecting user privacy, and users’ activity in the digital world can easily be logged and analyzed. Leakage of individual information may have serious consequences for users (including financial losses). To meet the increasing need for privacy protection in the digital world, multi-factor authentication schemes are enhanced with privacy-preserving technologies. For instance, anonymous authentication schemes allow a member of a legitimate group, called a prover, to convince a verifier that it is a member of the group without revealing any information that would uniquely identify the prover within the group. Various schemes for anonymous password authentication have been proposed, e.g., [12–15]. In particular, anonymous password authentication promises unlinkability: The verifier (e.g., the server of a service or identity provider) should not be able to link user authentications. Therefore, for any two authentication sessions, the verifier is unable to determine whether they have been performed by the same user or by two different users.

1.1. Building a Fully Decentralized Authentication Architecture

An Identity Provider (IDP) with a centralized database of authentication data of all users can easily provide an MFA scheme and offer convenient single sign-on (SSO) to other services for its users [16]. SSO allows users to receive a single token (identity) from the IDP once and repeatedly authenticate themselves to service providers. Several initiatives such as PRIMA [17], OAuth [18], SAML [19], and OpenID [16] let service providers take advantage of a centralized identity provider to authenticate users without becoming responsible for managing account passwords. In all these systems, the authentication follows a similar scheme (see Figure 2) [20]:
(1) In the registration phase, the user creates credentials (e.g., a username/ID and a password) and passes them to the IDP (a trusted server), which stores the username together with the hash of the password.
(2) In the authentication phase, the IDP verifies the user-supplied sign-on credential by matching the username and password hash.
(3) After successful verification, the IDP issues an authentication credential (a digital signature or a message authentication code) using a master secret key that authenticates the user to the service provider (e.g., a website) they want to visit.

However, this kind of centralized system poses several challenges:
(1) The IDP represents a single point of failure and an obvious target for attacks, such as (a) extraction of the secret key to forge tokens, which enables access to arbitrary services and data in the system, and (b) capturing hashed passwords (or biometrics) to run offline dictionary attacks in order to recover user credentials, both potentially resulting in severe damage to the reliability of the system [20].
(2) The IDP is actively involved in each authentication session and can, therefore, track user activity, leading to serious privacy issues [21, 22].
(3) The IDP takes a significant amount of control over the digital identity away from the user. Users cannot fully manage and store their identity by themselves but always need to rely on and interact with an available IDP that offers the identity management system to them and the service providers they want to interact with (active verification).

1.2. Our Contribution

To address the above challenges, we construct a novel decentralized privacy-preserving single sign-on scheme using a new Decentralized Anonymous Multi-Factor Authentication (DAMFA) scheme, where the process of user authentication no longer depends on a single trusted third party. Instead, it is fully decentralized onto a shared ledger to preserve user privacy while maintaining the single sign-on property. That is, users do not need to register their credentials with each service provider individually. The scheme also permits services where authenticating users remain anonymous within a group of users. Furthermore, our scheme does not require the IDP to be online during the verification (passive verification). Moreover, since there is no single third party (i.e., the IDP) in control of the whole authentication process, user and usage tracking by the IDP is inhibited.

The passive verification property of our scheme allows service providers to authenticate users at any time without requiring additional interaction with an IDP except what is available on the shared ledger. This property removes the cost of running secure channels between the service provider and the identity provider. Simultaneously, the IDP is eliminated as a single point of failure and attack within the authentication process.

The scheme relies on personal identity agents as auxiliary devices that assist the user in the authentication process. The personal identity agents participate in a threshold secret sharing scheme to store the distributed private key of their users. In the authentication phase, the user unlocks their private key through a combination of biometrics and a password, combining biometric, knowledge, and possession factors. The distributed architecture prevents offline attacks against data extracted from compromised agents, as long as only a set of agents below the threshold is compromised or corrupted.

We define the ideal functionality and real-world definitions for the security of our DAMFA scheme. We prove our construction’s security via ideal-real simulation, showing the impossibility of offline dictionary attacks. Finally, we demonstrate that our protocol is efficient and practical through a prototypical implementation and through a comparison of our scheme with other SSO works.

2. Related Work

2.1. Single-Factor (Password) Authentication Key Exchange

For a long time, knowledge was (and still is) used as a primary means of authentication. Single-factor authentication based on passwords and PINs is a well-studied mechanism. Bellovin and Merritt [24] proposed Encrypted Key Exchange (EKE), where a client and a server share a password and use it to exchange encrypted information to agree on a common session key. EKE was followed by several enhancements (cf. [25–27]). Bellare et al. [1] extended this line of work to a general formally provable model for Password-Based Authenticated Key Exchange (PAKE). Later, two generic PAKE schemes were proposed by Gennaro and Lindell [28] and by Groce and Katz [29], which are among the most efficient constructions of PAKE in the standard model.

Benhamouda and Pointcheval [30] explicitly introduce a verifier into the authenticated key exchange, where the verifier is a hash value (or transformation) H(pw, s) of the secret password pw with a public salt s, and the server stores the pair (s, H(pw, s)) for each user.

2.2. Multi-Factor Authentication

A single knowledge-based authentication factor has the disadvantage that an adversary only needs to compromise that single factor. Multi-factor authentication (MFA) overcomes this by combining multiple different factors. The most widely used combination is a long-term password with a secret key, possibly stored in a token (e.g., FIDO U2F). Shirvanian et al. [31] introduce a framework to analyze such two-factor authentication protocols. In their framework, the participants are a user, a client (e.g., a web browser), a server, and a device (e.g., a smartphone). In the authentication phase, the user sends a password and some additional information provided by the device. In most existing solutions, including Refs. [31–33], during the registration process, the user gets a value called the “token,” while the server records a hashed password. During the authentication phase, the two required factors (the password and the token) are sent to a verifier.

Jarecki et al. [34] provide a device-enhanced password-authenticated key exchange protocol employing mobile device storage as a token. This setting serves two purposes: Firstly, for an adversary to successfully mount an offline dictionary attack, they must corrupt the login server in addition to the mobile device storage. Secondly, the user must confirm access to the mobile device storage during login.

Another popular factor used to authenticate users to remote servers is biometrics [35–38]. Fleischhacker et al. [39] also propose a modular framework called MFAKE which models biometrics following the liveness assumption of Pointcheval and Zimmer [37]. However, Zhang et al. [40] demonstrate that their scheme does not adequately protect privacy. Indeed, biometric authentication becomes a weak point when the framework directly uses the biometric template for authentication. In addition, it requires the execution of many sub-protocols, which makes the scheme inefficient.

2.3. Anonymous Authentication

Another approach towards user authentication is the anonymous password authentication protocol proposed by Viet et al. [12]. They combine an oblivious transfer protocol and a password-authenticated key exchange scheme. Further enhancements were proposed by Refs. [14, 15, 38].

An anonymous authentication protocol permits users to authenticate themselves without disclosing their identity and becomes an important method for constructing privacy-preserving authenticated public channels.

Zhang et al. [40] presented a new anonymous authentication protocol that relies on a fuzzy extractor. They consider a practical application and suggest several authentication factors such as passwords, biometrics (e.g., fingerprint), and hardware with reasonably secure storage (e.g., smartphone).

2.4. Summary of Related Works

Single-factor authentication based on passwords is a primary means of many authentication protocols [1, 25, 28, 41]. Multi-factor authentication (MFA) overcomes the problem of compromise of a single factor by combining multiple different factors [31, 34, 40, 42, 43]. An anonymous authentication protocol permits users to authenticate themselves without disclosing their identity [12, 14, 15, 44]. Finally, SSO allows users to receive a single token from the IDP once and repeatedly authenticate themselves to service providers [16, 17, 19, 45, 46].

3. Building Blocks

3.1. Pointcheval and Sanders Signature

Our work relies on the credentials scheme proposed by Pointcheval and Sanders [47]. The scheme works in a bilinear group of type 3, with a bilinear map e: G1 × G2 → GT, and has the following algorithms:
(1) Setup: Choose a bilinear group (G1, G2, GT) with order p, where p is a prime number. Let g be a generator of G1, and g̃ a generator of G2. The system parameters are pp = (p, G1, G2, GT, e, g, g̃).
(2) KeyGen: Choose a random secret key sk = (x, y) ∈ Z_p^2. Parse pp, and publish the verification key vk = (g̃, X̃, Ỹ) = (g̃, g̃^x, g̃^y).
(3) Sign(sk, m): Parse sk = (x, y). Pick a random element h ∈ G1 \ {1}, and output σ = (σ1, σ2) = (h, h^(x+y·m)).
(4) Verify(vk, m, σ): Parse σ as (σ1, σ2) and check whether σ1 ≠ 1 and e(σ1, X̃·Ỹ^m) = e(σ2, g̃) are both satisfied. In the positive case, output 1, otherwise 0.

The signature is randomizable by choosing a random t ∈ Z_p and computing σ′ = (σ1^t, σ2^t). The above scheme can be modified to obtain a signature on a hidden message (commitment) and also offers a protocol to prove knowledge of a signature σ in zero knowledge.
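To see why randomization preserves validity, one can check the verification equation directly. Using the notation above (σ = (σ1, σ2) = (h, h^(x+y·m)) and verification key (g̃, X̃, Ỹ) = (g̃, g̃^x, g̃^y)), the randomized signature satisfies the same pairing equation for the same message m:

```latex
\sigma' = (\sigma_1^t,\ \sigma_2^t) = \left(h^t,\; h^{t(x+ym)}\right),\qquad
e\!\left(\sigma'_1,\; \widetilde{X}\,\widetilde{Y}^{m}\right)
= e\!\left(h^t,\; \tilde{g}^{\,x+ym}\right)
= e(h,\tilde{g})^{\,t(x+ym)}
= e\!\left(\sigma'_2,\; \tilde{g}\right).
```

Since t is fresh for each showing, two randomized versions of the same signature are unlinkable, which is what makes PS signatures suitable as anonymous credentials.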

3.2. Oblivious Pseudo-random Function (OPRF)

A pseudo-random function (PRF) F is a function that takes two inputs: a secret function key k and a value x to compute on. It outputs F_k(x). A PRF family is secure if no probabilistic polynomial time (PPT) distinguisher can distinguish a function picked at random from the family from a truly random function with the same domain and range, except with negligible probability. An oblivious PRF (OPRF, cf. [48]) is a protocol between two parties (a sender and a receiver) that securely computes F_k(x), where k is the input of the sender and x the input of the receiver, such that the sender learns nothing and the receiver learns only F_k(x).

A threshold OPRF (TOPRF, cf. [49]) is an extension of the OPRF which allows a group of n servers to secret share a key k for a PRF F, together with a shared PRF evaluation protocol which lets the user compute F_k(x) on an input x, so that both k and x remain secret as long as no more than t of the n servers are corrupted (see Figure 3).

A formal definition of the TOPRF protocol as a realization of the TOPRF functionality is given in Figure 4. Note that we duplicate these functionalities only so that readers can easily follow our ideal functionality and construction (for more details, see [49]).

3.3. Secret Sharing Scheme

A (t, n) secret sharing scheme consists of two PPT algorithms [50]: First, Share generates n shares (sk_1, ..., sk_n) of the secret key sk, and second, Recon uses any t+1 shares to retrieve the primary secret value sk. The security requirement of this scheme is that any number of shares below the threshold discloses no information about the secret key.
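Such a scheme can be sketched with Shamir's polynomial-based construction over a prime field: the secret is the constant term of a random degree-t polynomial, shares are evaluations, and Lagrange interpolation at zero recovers the secret. The field size and API below are our choices for illustration.

```python
import secrets

Q = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(Q)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t+1 shares reconstruct it.

    The secret is f(0) for a random polynomial f of degree t."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from t+1 shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

sk = secrets.randbelow(Q)
shares = share(sk, t=2, n=5)
assert reconstruct(shares[:3]) == sk   # any t+1 = 3 shares suffice
```

With t or fewer shares, every candidate secret remains equally consistent with the observed points, which is exactly the information-theoretic guarantee stated above.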

3.4. Public Append-Only Ledger

A ledger allows us to keep a list of public information and maintains the integrity of the dataset. It guarantees a consistent view of the ledger for every party. Every user can insert information into the ledger and, once some data are uploaded, nobody can delete or modify them. Moreover, the ledger assures the correctness of pseudonyms and guarantees that no one can impersonate another participant to release information. Furthermore, it distributes up-to-date data to all participants. In this paper, we assume these properties hold and construct our system on blockchain technology as a public append-only ledger. There are already some works constructing advanced applications based on this assumption, such as Refs. [51–53]. Yang et al. [54] formally define a public append-only ledger, which we use for constructing our DAMFA system (see Figure 5).
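Locally, the append-only property can be approximated by a hash chain in which each entry commits to its predecessor. This is a much weaker stand-in for a blockchain (single writer, no consensus, no pseudonym checks), but it illustrates the tamper-evidence the ledger assumption provides; the class and method names are ours.

```python
import hashlib
import json

class AppendOnlyLedger:
    """A minimal hash-chained list: each entry's hash covers the previous hash,
    so any retroactive modification is detectable by replaying the chain."""

    def __init__(self) -> None:
        self.entries: list[tuple[dict, str]] = []   # (payload, chained hash)

    def append(self, payload: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        h = hashlib.sha256(
            (prev + json.dumps(payload, sort_keys=True)).encode()
        ).hexdigest()
        self.entries.append((payload, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain from the genesis value; False on any tampering."""
        prev = "genesis"
        for payload, h in self.entries:
            expected = hashlib.sha256(
                (prev + json.dumps(payload, sort_keys=True)).encode()
            ).hexdigest()
            if expected != h:
                return False
            prev = h
        return True

ledger = AppendOnlyLedger()
ledger.append({"nym": "nym1", "cred": "c1"})
ledger.append({"nym": "nym2", "cred": "c2"})
assert ledger.verify()
ledger.entries[0] = ({"nym": "evil", "cred": "c1"}, ledger.entries[0][1])  # tamper
assert not ledger.verify()
```

In the actual system, consensus among the blockchain nodes replaces the single trusted verifier of this sketch.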

The ledger functionality executes the following steps with parties P_1, ..., P_n and an ideal adversary S:
(1) Init: It creates an empty list L in the beginning.
(2) Append: On input (Append, nym, m) from a party P_i, it checks that nym is a valid pseudonym for P_i, then stores the tuple (nym, m) in L and declares to S that a new item was appended to the list L.
(3) Retrieve: On input (Retrieve) from a party P_i, it returns the list L to P_i.

3.5. Zero-Knowledge Proof of Knowledge

In a zero-knowledge proof of knowledge system [55], a prover proves to a verifier that it possesses the witness for a statement without revealing any additional information. In this paper, we use noninteractive zero-knowledge proofs obtained via the Fiat-Shamir heuristic [56], as they have the advantage of being noninteractive. For example, NIZK{(a, b) : A = g^a ∧ B = g^b} denotes a noninteractive zero-knowledge proof of knowledge of the elements a and b such that both A = g^a and B = g^b are satisfied. The values a and b are assumed to be hidden from the verifier. Similarly, the algorithm can admit a message m as input; such a proof is then also called a signature proof of knowledge, denoted SPK(m).
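As a concrete instance, a Schnorr proof of knowledge of a discrete logarithm, made noninteractive via Fiat-Shamir, can be sketched as follows. The group parameters are toy values for illustration; the challenge is derived by hashing the statement and the commitment, which replaces the verifier's random challenge.

```python
import hashlib
import secrets

# Toy subgroup p = 2q + 1 with q prime; g = 4 has order q. Illustration only.
p, q, g = 1019, 509, 4

def prove(x: int) -> tuple[int, int, int]:
    """NIZK{(x) : A = g^x}: Schnorr proof made noninteractive via Fiat-Shamir."""
    A = pow(g, x, p)
    k = secrets.randbelow(q)                 # prover's ephemeral nonce
    R = pow(g, k, p)                         # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}|{A}|{R}".encode()).digest(),
                       "big") % q            # hash-derived challenge
    s = (k + c * x) % q                      # response
    return A, R, s

def verify(A: int, R: int, s: int) -> bool:
    """Accept iff g^s = R * A^c, with c recomputed from the transcript."""
    c = int.from_bytes(hashlib.sha256(f"{g}|{A}|{R}".encode()).digest(),
                       "big") % q
    return pow(g, s, p) == (R * pow(A, c, p)) % p

A, R, s = prove(secrets.randbelow(q))
assert verify(A, R, s)
```

The verifier learns that the prover knows x with A = g^x, but the transcript (R, s) is simulatable and reveals nothing about x itself; conjunctions of such statements give the multi-relation proofs used later in the paper.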

3.6. Dynamic Accumulators

A dynamic accumulator is a primitive allowing a large set of values to be accumulated into a single quantity, the accumulator. For each value, there exists a witness which is the evidence attesting that the value is indeed contained in the accumulator. The proof showing that a value is part of an accumulator can be a zero-knowledge proof, which reveals neither the value nor the witness to the verifier. Camenisch et al. [57] define a concrete construction of dynamic accumulators with the five algorithms AccSetup, AccAdd, AccUpdate, AccWitUpdate, and AccVerify:
(1) AccSetup: This is the algorithm to output the public parameters. Select bilinear groups (G, G_T) with a prime order p and a bilinear map e: G × G → G_T. Select a generator g ∈ G. Select a random γ ∈ Z_p and compute g_1, ..., g_n, g_{n+2}, ..., g_{2n} with g_i = g^(γ^i), as well as z = e(g, g)^(γ^(n+1)). Generate a key pair (sk, pk) for a secure signature scheme. Compute and publish (p, G, G_T, e, g, g_1, ..., g_n, g_{n+2}, ..., g_{2n}, z) and pk as the public parameters.
(2) AccAdd: Compute the witness w_i = ∏_{j∈V, j≠i} g_{n+1-j+i} and a signature σ_i on (i, g_i) under signing key sk. The algorithm outputs w_i, an updated accumulator value acc_{V∪{i}} = acc_V · g_{n+1-i}, and σ_i.
(3) AccUpdate: This is the algorithm to compute the accumulator using the public parameters. The accumulator of the set V is computed as acc_V = ∏_{i∈V} g_{n+1-i}.
(4) AccWitUpdate: This is the algorithm to update the witness that a value is included in an accumulator, using the public parameters. Given the witness w of value i in the old set V_old and the accumulator acc_V, the witness of i in V is computed as w′ = w · (∏_{j∈V∖V_old} g_{n+1-j+i}) / (∏_{j∈V_old∖V} g_{n+1-j+i}).
(5) AccVerify: This is the algorithm to verify that a value i is included in an accumulator, using the witness and the public parameters. Given g_i, w, and acc_V, accept if e(g_i, acc_V) / e(g, w) = z.

As Camenisch et al. [57] point out, the purpose of an accumulator is to have accumulator and witnesses of size independent of the number of accumulated elements.

3.7. Pedersen Commitments

Using a commitment scheme, users can bind themselves to a chosen value without revealing the actual value to a third party receiving the commitment. Thereby, a user cannot change their choice (binding), and, at the same time, the recipient of a commitment does not learn anything about the actual value the user committed to (hiding of the value). Pedersen commitments [58] have a group G of prime order q and generators (g, h) as public parameters. For committing to a value m ∈ Z_q, a user picks a random r ∈ Z_q and sets C = g^m h^r.

4. Decentralized Anonymous Multi-Factor Authentication (DAMFA)

We build a new practical decentralized multi-factor authentication scheme, Decentralized Anonymous Multi-Factor Authentication (DAMFA), where the process of user authentication no longer depends on a single trusted third party. The scheme also permits services where authenticating users remain anonymous within a group of users. Furthermore, our scheme does not require the IDP to be online during the verification. To protect the user's private key, we use personal identity agents as auxiliary devices that participate in a threshold secret sharing scheme to store the distributed private key of the user.

4.1. System Model

The overall system model of DAMFA is shown in Figure 6. The protocol is executed between four participants:
(1) User U: A user who wants to access various services offered by different service providers. During the registration phase (which runs only once), U obtains a biometric template from a sensor and chooses a password. In the authentication phase, users interact with a set of personal identity agents to authenticate themselves in an anonymous manner.
(2) Personal identity agent PA_i: We associate each user with a set of personal agents, which are auxiliary devices that assist a user in creating a credential for authentication. These personal agents remain under the administrative control of their associated users, who can freely choose where to run them. For example, they could run on a smart home controller, at a cloud provider, or even on a mobile phone. U generates a private key and executes threshold secret sharing on the private key to generate secret shares of that private key. The user stores the secret shares among their personal agents such that each PA_i holds one share of the overall secret key.
(3) Service provider (verifier) SP: These are the service providers (untrusted and distributed servers) that require authentication from a user U. After verifying a user's credentials, they provide access to the corresponding service.
(4) Identity provider IDP: The identity provider is an entity that issues credentials to users. These credentials grant permission to use specific services by proving membership of a specific permission group (clients, employees, department members, account holders, subscribed users, etc.).

In addition, users act as nodes in the blockchain network: They collaboratively maintain a list of credentials in a public ledger (blockchain) and enforce a specific credential issuing policy when adding to that list. For more details on how these steps work, we refer to subsection 4.3., High-Level View.

4.2. Threat Model

In order to demonstrate the security of the proposed protocol, we determine the capabilities and possible actions of an attacker. We consider a PPT attacker who has full control of the communication channels. They can eavesdrop on all messages in public channels and also modify, add, and remove messages on the network. The attacker can, at any time, corrupt up to t of the user's agents (no more than the threshold t), in which case the attacker learns all the long-term secrets of the corrupted agents (such as private key shares or master shared keys).

In the proposed protocol, we consider some privacy requirements such as unlinkability, identity privacy, and user data privacy: Unlinkability means that an adversary cannot distinguish a user who is authenticating from any (other) user who has authenticated in the past. Identity privacy means that an adversary cannot determine if a given authentication credential belongs to a specific user. User data privacy means that an adversary cannot learn anything about the user’s sensitive authentication data (i.e., biometric data, password).

4.3. High-Level View

To build a fully decentralized authentication architecture, we need to set up a small distributed shared database (to store credentials) between nodes. Data are highly available, but nobody has control over the database. Furthermore, data written in the past must not be modifiable: user data need to be immutable, and data should be publicly accessible. We employ a public append-only ledger in order to fulfill these requirements. A ledger (blockchain) maintains the integrity of the dataset and guarantees a consistent view of the data for every party. Every participant can append information to the ledger and, once uploaded, nobody can delete or modify the data.

Definition 1 (DAMFA). A DAMFA system consists of a global transaction ledger instead of a single party representing the organization. Moreover, the DAMFA scheme consists of the following phases:
(1) Setup: In the setup phase, we define the public parameters and execute the following algorithm: U generates a private key and executes threshold secret sharing (TSS) on the private key to generate shares of that secret. The user stores the secret shares among their personal agents (similar to the initialization of TOPRF [49], done via a distributed key generation for discrete-log-based systems, e.g., Ref. [59]).
(2) Registration: In the registration phase, the user first selects a password and collects their biometric at a sensor. Then, U runs the TOPRF protocol by interacting with the personal agents to reconstruct the TOPRF secret key. After that, the IDP issues a membership credential that shows that U is a valid member (employee, account holder, subscribed user, etc.). For this purpose, U sends a request with a pseudonym and a (noninteractive) zero-knowledge proof (NIZK) which indicates they are the owner of the pseudonym (they know the secret key that belongs to the pseudonym) and authenticates them to the IDP. Then, U receives a membership credential, which is a signature on their pseudonym. The user creates a pseudonym and verification information, namely, a protected credential, by encrypting the membership credential with the TOPRF secret key. Subsequently, U computes a NIZK proof that (1) the credential and the pseudonym contain the same secret key and (2) they know the signature issued by the ID provider (i.e., they have valid group membership). Note that the user can execute these actions in an offline state because no interaction with the public ledger is required. Finally, the nodes accept the credential to the ledger if and only if this proof is valid.
(3) Authentication: The user attempts to access the services of an SP in an anonymous and unlinkable way. SP authenticates the user if and only if the user provides a valid credential. First, a service provider sends an authentication request (which is a signature) to U. The user inserts the password and the biometric and runs the TOPRF protocol by interacting with the personal agents to reconstruct the TOPRF secret value. U first scans the public ledger to obtain the accumulator, which is a set consisting of all credentials belonging to a specific IDP. Then, U finds their own protected credential within this set (via the pseudonym). U decrypts the protected credential using the TOPRF secret key and recovers the initial credential (a signature from the IDP). U presents the credential under a different pseudonym by proving in zero-knowledge that (1) they know a credential on the ledger from the IDP, (2) the credential opens to the same secret key as their own pseudonym, and (3) they possess a membership credential from the IDP (the signature), cf. [52]. SP scans the public ledger to obtain the accumulator, which is a set consisting of all credentials belonging to a specific organization. Then, it checks the validity of the candidate credential by finding it in the set and checking the proof of knowledge on the credential and pseudonym.
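The credential protect/recover step in the authentication phase can be sketched as an ElGamal-style encryption of the (group-encoded) credential under the TOPRF-derived key: only a party who re-derives the key from the correct password and biometric can recover the IDP's signature. This is a minimal illustration in a toy group; the function names and all parameters are ours, not the paper's.

```python
import secrets

# Toy prime-order subgroup (p = 2q + 1, g of order q); illustration only.
p, q, g = 1019, 509, 4

def protect_credential(cred: int, K: int) -> tuple[int, int]:
    """ElGamal-style encryption of a group-encoded credential under the
    TOPRF-derived key K (used here as the ElGamal secret exponent)."""
    r = secrets.randbelow(q - 1) + 1
    pk = pow(g, K, p)                          # public value derived from K
    return pow(g, r, p), (cred * pow(pk, r, p)) % p

def recover_credential(C: tuple[int, int], K: int) -> int:
    """Anyone who re-derives K (correct password + biometric) can decrypt."""
    c1, c2 = C
    return (c2 * pow(pow(c1, K, p), -1, p)) % p

K = secrets.randbelow(q - 1) + 1               # stand-in for the TOPRF output
cred = pow(g, 123, p)                          # stand-in for the IDP's signature
C = protect_credential(cred, K)
assert recover_credential(C, K) == cred
```

Because K itself never appears on the ledger, and its derivation is rate-limited by the threshold of honest agents, an attacker holding only the ciphertext cannot mount an offline dictionary attack on the password.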

4.4. The DAMFA Functionality

We formally define the proposed scheme's security by presenting its ideal functionality, which is implemented via a trusted party with a public ledger. All communication takes place through this ideal trusted party. In the UC framework [60, 61], several copies of the ideal functionality may run in parallel. Each copy has a unique session identifier (SID), and every message sent to a specific copy contains the SID of the copy it is intended for. As noted in Ref. [49], we also use a ticketing mechanism, which ensures that in order to test a password and biometric guess, the attacker must impersonate agents. To this end, a counter tx_i is defined for each personal agent PA_i. When an agent completes its interaction, the functionality increases the counter tx_i. On the other hand, when a user, either honest or corrupt, completes an interaction associated with PA_i, tx_i decreases by 1. This ensures that for any honest agent PA_i, the number of user-completed OPRF evaluations with PA_i is no more than the number of agent-completed OPRF evaluations of PA_i. The functionality spends agent tickets for accessing the proper TOPRF result by reducing (nonzero) ticket counters for an arbitrary set of agents in SI. The ideal functionality proceeds as follows:

4.4.1. Registration

(i) Upon receiving for from , records this message and sends to (other Reg commands are ignored). Computes a secret key using the TOPRF protocol and if then sends to .
(ii) Upon receiving from , if a record exists and then marks as active and sends to .
(iii) Upon receiving from , if the record exists and all agents in are marked active, then runs a commitment scheme and an encryption to get , respectively, and sets the pseudonym as and as the credential. It records , and sends and to its public ledger and , respectively.

4.4.2. Authentication

(i) Upon receiving for from , retrieves , records , and sends to . Future Auth commands involving the same ssid are ignored.
(ii) Upon receiving from , if is marked active, then sets (setting it to 1 if it is undefined) and sends to .

4.4.3. Password and Biometric Test

(i) After receiving from , if then sets tested and and , retrieves , and if and and , then returns to , marks the record compromised, and responds to with “correct guess”; else returns FAIL.

4.4.4. Authentication for Service Provider

(i) Every participant can obtain all data in the public ledger of the trusted party by submitting a “retrieve” request to . then retrieves the intended credential issued by from and accepts the functionality's assertion only if .
(ii) Key generation: Upon receiving , for from , if there is a record , where , then do:
(a) If this record is compromised, so that and or (), then output to player .
(b) Else, if this record is fresh and there is a record with and , then send (a random key) to player .
(c) In any other case, pick a random key and send to .
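The ticketing bookkeeping of the ideal functionality can be sketched as a simple per-agent counter: a user-side completion may only "consume" an evaluation that some agent has actually completed, which is what forces an online attacker to interact with real agents for every guess. The class and method names are illustrative only.

```python
class TicketCounter:
    """Per-agent ticket counters in the spirit of the functionality's tx_i:
    user-completed evaluations attributed to an agent never exceed that
    agent's own completed evaluations."""

    def __init__(self) -> None:
        self.tx: dict[str, int] = {}

    def agent_completed(self, agent_id: str) -> None:
        """Agent PA_i finishes an OPRF interaction: tx_i += 1."""
        self.tx[agent_id] = self.tx.get(agent_id, 0) + 1

    def user_completed(self, agent_id: str) -> bool:
        """A user-side completion attributed to PA_i: allowed only if tx_i > 0."""
        if self.tx.get(agent_id, 0) <= 0:
            return False   # would exceed the agent's completions: reject
        self.tx[agent_id] -= 1
        return True

tc = TicketCounter()
tc.agent_completed("PA1")
assert tc.user_completed("PA1")          # ticket available, consumed
assert not tc.user_completed("PA1")      # no remaining ticket for PA1
```

In the proof, this invariant is what lets the simulator bound the number of password/biometric guesses by the number of sessions the adversary actually runs with honest agents.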

Definition 2 (Secure DAMFA). Let Π be a probabilistic polynomial time protocol realizing the DAMFA functionality. We say that Π is secure if, for every PPT real-world adversary A attacking DAMFA, there exists a PPT ideal-world simulator S such that the outputs of the registration and authentication phases in the real and ideal world interactions are computationally indistinguishable: REAL_{Π,A} ≈_c IDEAL_{F_DAMFA,S}.

4.5. Our Construction
4.5.1. Setup Phase

We select three groups of prime order and an efficiently computable, nondegenerate bilinear pairing . We let and be generators of and , respectively, and the generator of . Note that the system is assumed to support a one-way Bio-hash function , which resolves the recognition errors of general hash functions [62]. We consider two additional hash functions and . We publish the set of system parameters , where . The user generates a private key and then executes a secret sharing scheme on to create a secret key share for each personal agent . stores the secret shares among the personal agents.
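The secret-sharing step above can be sketched as follows. This is a minimal Shamir-sharing illustration in Python; the toy field modulus, threshold, and all names are ours, not the paper's actual parameters:

```python
# Toy Shamir secret sharing: the user splits her private key k into
# n shares (one per personal agent); any t of them reconstruct k.
import random

q = 2**61 - 1  # illustrative prime field modulus (Mersenne prime)

def share_secret(k, t, n):
    """Split secret k into n points on a random degree-(t-1) polynomial."""
    coeffs = [k] + [random.randrange(q) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(q)."""
    secret = 0
    for xi, yi in shares:
        lam = 1
        for xj, _ in shares:
            if xj != xi:
                lam = lam * xj % q * pow(xj - xi, -1, q) % q
        secret = (secret + yi * lam) % q
    return secret

k = random.randrange(q)
shares = share_secret(k, t=2, n=3)
assert reconstruct(shares[:2]) == k   # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == k
```

Note that fewer than t shares reveal nothing about k, which is what lets the scheme tolerate corrupted personal agents below the threshold.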

4.5.2. Registration Phase

To register to the system, the user first chooses a password and scans her biometric impression at the sensor. Then, runs the following steps to register herself in the system.
(i) The user runs the TOPRF protocol [49] with the agents to compute the secret value as follows:
(a) The user picks a random number , computes , and sends the message to all .
(b) Upon receiving the message from the user, each computes the Lagrange interpolation coefficient and applies its secret key (s.t. ). They return the message to .
(c) After receiving all messages from the personal agents, computes: .
(ii) To obtain a membership credential from the IDP, we use the PS signature protocol [47] to derive a signature on a hidden committed message as follows:
(a) : The IDP runs this algorithm to generate its private and public keys. It selects , computes and , and sets and .
(b) Protocol: The user first selects a random and computes , a commitment on her secret key. She then sends to the IDP. They both run a proof of knowledge of the opening of the commitment (authentication). If the signer is convinced, the IDP selects a random and returns . The user can now unblind the signature and obtain a valid signature over her secret key and the message by computing , as described in Sect. 3.1.
(c) Verify: To verify this signature, the user executes this algorithm and computes:
(d) : .
(iii) CreatePC: The user generates a protected credential with the TOPRF secret key derived from the password and the biometric: picks a random number to generate a pseudonym and computes an El-Gamal encryption of the credential under the secret TOPRF value into a ciphertext : .
(iv) Proof: A NIZK proof of knowledge of the credential (PS signature [47]) works as follows: selects random and computes , sends to the verifier, and carries out a zero-knowledge proof of knowledge (such as Schnorr's interactive protocol) of , , and such that
(v) At the end of this phase, submits the resulting values to the public ledger nodes, where is a proof of knowledge of and . If the signature verifies successfully, the algorithm outputs 1, otherwise 0. The nodes accept the values into the ledger only if this algorithm returns 1.
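As an illustration of step (i), a minimal 2HashDH-style threshold OPRF evaluation (blind the hashed password, evaluate under each agent's key share, combine via Lagrange coefficients in the exponent, unblind) might look as follows. The toy safe-prime group and all parameter names are our assumptions; the actual construction of [49] runs over an elliptic-curve group:

```python
# Toy TOPRF sketch: user learns H(pw)^k while no agent learns pw or k.
import hashlib

p = 1019          # safe prime, p = 2q + 1
q = 509           # order of the quadratic-residue subgroup

def hash_to_group(pw):
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % p
    return pow(h, 2, p)          # squaring maps into the QR subgroup

def toprf(pw, shares, r):
    """shares: {x_i: k_i}, Shamir shares of the OPRF key k (threshold = len(shares))."""
    a = pow(hash_to_group(pw), r, p)            # user sends blinded value H(pw)^r
    out = 1
    xs = list(shares)
    for xi in xs:
        b_i = pow(a, shares[xi], p)             # agent i replies with a^{k_i}
        lam = 1                                  # Lagrange coefficient at x = 0
        for xj in xs:
            if xj != xi:
                lam = lam * xj % q * pow(xj - xi, -1, q) % q
        out = out * pow(b_i, lam, p) % p        # combine shares in the exponent
    return pow(out, pow(r, -1, q), p)           # unblind: (H(pw)^{rk})^{1/r} = H(pw)^k

# sanity check: TOPRF output equals direct evaluation with the full key k
k = 123
shares = {1: (k + 7) % q, 2: (k + 2 * 7) % q}   # shares of the line f(x) = k + 7x
assert toprf(b"correct horse", shares, r=5) == pow(hash_to_group(b"correct horse"), k, p)
```

Because the agents only ever see the blinded value a, an offline dictionary attack requires corrupting at least a threshold of agents, which is the property the scheme relies on.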

4.5.3. Authentication Phase

In this phase, a user authenticates herself to the service provider and establishes a session key with it. The following steps are executed by , , and :
(i) First, the server chooses a secret key and computes . Then, generates a signature on the message (i.e., a Schnorr signature [55]) using its secret key and sends the message to the user.
(ii) Upon receiving the pair , the client verifies that is a valid signature on the message under 's public key. If is valid, inserts and scans her biometric impression at the sensor.
(iii) The user interacts with the personal agents and runs the necessary steps of the TOPRF protocol . Then, decrypts the ciphertext with the TOPRF secret key to recover the credential .
(iv) Show: The user creates a NIZK proof ensuring that the credential is well formed and relates to the same secret values as her pseudonym. Here we prove that: (1) she knows a credential on the ledger issued by the IDP, (2) the credential includes the same secret key as her pseudonym, and (3) she possesses a valid credential (signature). We use the bilinear-map accumulator [57] to accumulate the group elements instead of, e.g., the integers . In addition, Camenisch et al. [57] describe an efficient zero-knowledge proof of knowledge, in the style of Schnorr's protocol [55, 56], that a committed value is in an accumulator. See Refs. [57, 63] for how this proof works. U runs the following steps to authenticate herself:
(a) The user selects a random number to generate a pseudonym for communication with service providers.
(b) picks random numbers and computes a randomized commitment to the credential (as in the previous step): .
(c) Then, calculates , a secret session key , and .
(d) For a set of credentials , computes an accumulator and witness as and , carries out a zero-knowledge proof of knowledge of the credential, and outputs the following proof of knowledge such that
Finally, U sends the message to the service provider.
(v) After receiving the message from the user, the service provider first scans through the ledger to obtain a set consisting of all credentials issued by the IDP. It computes the accumulator and then verifies that is the aforementioned proof of knowledge on and using the known public values. If the proof verifies successfully, it outputs 1 and computes the session key as follows: .

Then, SP computes and checks . If both hold, SP accepts as the session key and considers the user authentic.

Note that we can simply send alongside the proof-of-knowledge message. With this, we can prove that the construction is a -protocol (see Ref. [47] for how the proof of knowledge of a PS signature works).
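The Schnorr signature with which the service provider authenticates its Diffie-Hellman share in step (i) can be sketched as follows. The toy group and all parameter names are ours; a deployment would sign over an elliptic-curve group with a proper nonce-generation strategy:

```python
# Toy Schnorr signature over the quadratic-residue subgroup mod a safe prime.
import hashlib

p, q, g = 1019, 509, 4   # illustrative safe-prime group; g = 2^2 has order q

def schnorr_sign(sk, msg, nonce):
    R = pow(g, nonce, p)                       # commitment R = g^nonce
    c = int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q
    s = (nonce + c * sk) % q                   # response s = nonce + c*sk mod q
    return (c, s)

def schnorr_verify(pk, msg, sig):
    c, s = sig
    R = pow(g, s, p) * pow(pk, -c, p) % p      # g^s * pk^{-c} recovers g^nonce
    return c == int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

sk = 42
pk = pow(g, sk, p)
sig = schnorr_sign(sk, b"g^b || SP_id", nonce=17)
assert schnorr_verify(pk, b"g^b || SP_id", sig)
```

In the protocol, the signed message binds the server's ephemeral Diffie-Hellman share to its identity, which is what lets the client reject a man-in-the-middle substituting its own share.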

4.6. Optimization

To exploit the fact that the accumulator in our construction can be computed incrementally, we let any node mining a new block add this block's accumulator to the previous one. The node stores the result as a new accumulator value in the transaction at the beginning of the new block, namely, the accumulator checkpoint. Peer nodes validate this computation before accepting the new block into the blockchain. With this optimization, no longer needs to compute the accumulator from scratch. Instead, can merely reference the current block's accumulator checkpoint and compute starting from the checkpoint preceding her mint (instead of starting at the beginning).
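The checkpoint idea can be illustrated with the following toy sketch. Since the pairing-based accumulator of [57] cannot be reproduced in a few lines, we model only the incremental-update property of an accumulator of the form acc = g^{prod(s + c_i)} in a plain modular group, with an illustrative trapdoor s held by the party extending it; all names and parameters are hypothetical, and pairing-based verification is omitted:

```python
# Toy accumulator checkpointing: each block's miner extends the previous
# checkpoint with that block's credentials instead of re-accumulating
# everything from genesis.
p, q, g, s = 1019, 509, 4, 77     # toy group parameters; s is the trapdoor

def extend_checkpoint(acc, new_creds):
    """Fold one block's credentials into the running accumulator value."""
    e = 1
    for c in new_creds:
        e = e * ((s + c) % q) % q  # multiply (s + c_i) factors in the exponent
    return pow(acc, e, p)

block1 = [11, 22]
block2 = [33]
cp1 = extend_checkpoint(g, block1)    # checkpoint stored in block 1
cp2 = extend_checkpoint(cp1, block2)  # block 2 extends it incrementally

# the incremental result matches accumulating everything from scratch
assert cp2 == extend_checkpoint(g, block1 + block2)
```

Peer nodes can recompute the same extension when validating a block, which is exactly the checkpoint-validation step described above.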

Theorem 1. Our proposed protocol is secure against any nonuniform PPT adversary corrupting many personal agents, assuming that the El-Gamal encryption, the zero-knowledge proof of signature, and the TOPRF protocol are secure and that the hash function is collision resistant.

4.7. Security Proofs of Theorem 1
4.7.1. Proof Sketch

Our DAMFA construction is modular and relies directly on the TOPRF and the zero-knowledge proof. Its security is therefore inherited from those building blocks:

Credential security requires that no adversary can present a credential (by guessing passwords and biometrics) and generate a session key without having had access to it. Since we apply a TOPRF to the users' passwords and biometrics, the security properties of the TOPRF make them hard to guess. The proof is again twofold:
(i) First, authentication is done through a zero-knowledge proof. At this step, the adversary either presents an invalid credential and still manages to build a valid proof, thereby breaking the soundness of the underlying proof of knowledge, or else uses a valid credential.
(ii) We now assume the adversary wins using a valid credential and rely on the obliviousness of the TOPRF: we interact with a TOPRF challenger to answer every adversarial request and, at the end, use the (valid) credential output by the adversary to break TOPRF obliviousness, which concludes the proof.

4.7.2. Anonymity

During the registration phase, when a user reveals her pseudonym but does not (intentionally) reveal her secret key , no adversary should learn any information about the secret key or the identity. Moreover, during the authentication phase, a user proves her credential using a zero-knowledge proof, which reveals no additional information about her secret key or identity to the SP.

The simulator is essentially an ideal-world adversary that interacts with the functionality and the environment . We also assume that our zero-knowledge signature of knowledge has an efficient extractor and a simulator and that the signature is unforgeable. To guarantee that the view of the environment in the ideal world is indistinguishable from its view in the real world, invokes the real-world adversary and simulates all other entities for . For the most part, the simulator then follows the actions of the adversary appropriately.

4.7.3. Description of the Simulator

Once the adversary registers a new user to the system by storing a tuple on the bulletin board, the simulator registers this user in the ideal world via the following process. It acts as an interface between the honest parties in the real world (the user and the personal agents, denoted by , where wlog. , since all personal agents in our solution are identical) and the corrupted parties in the ideal world (the service provider and the personal agents, denoted by , where ). The simulator behaves as follows:

(1) Registration.
(1) Upon receiving from , ignores it if . Otherwise, records and sends to for all . If sends , records it.

Remark 1. Since simulates in the ideal world, receives whatever they receive from .
(2) After receiving from for some , checks whether it has a record of in its list of users. If a user with exists, retrieves the associated and proceeds; the simulator then employs the knowledge extractor to obtain . If it is not on the list, follows the protocol to register as a user by choosing a random password and . It generates secret shares of for each corrupted personal agent, records , and sends to and .
(3) Upon receiving from , retrieves , computes a pseudonym and a credential , where . It records and sends to its public ledger and , where is a proof of knowledge. stores in its list of granted credentials.

Remark 2. When an honest user wants to establish a credential through the functionality, the simulator creates a credential and uses the simulator of the signature of knowledge to simulate the associated proof. It then transmits the credential information to the trusted store.

(2) Authentication.
(1) Upon receiving , where , from , retrieves the corresponding stored in the registration phase. If there is a set stored in the registration phase and is defined, then executes the TOPRF protocol with each personal agent using the password and , receives from , and sends to .

Remark 3. The initialization also specifies a parameter used to identify a table of random values that define the proper PRF values computed by the user when interacting with any subset of honest servers from the set SI. An additional parameter , with corresponding tables , can be specified by the adversary to represent rogue tables of values computed by the user when interacting with corrupted servers (see [49] for details).
(2) Upon receiving from , recovers and corresponding to as stored in the database during the registration phase (ignoring this message if no corresponding tuples exist). checks whether each used the correct corresponding values. It ignores this message if either of the following conditions fails: if , then , or all servers in are honest. Otherwise, sends to , where is a random secret key, and sets for as follows:
(a) Case 1: the adversary employed the correct in the real protocol. detects this by verifying that . Therefore, sets and sends the in its database to , where was sent by .
(b) Case 2: otherwise, the adversary employed an incorrect in the real protocol. detects this by verifying that . So sets and defines as the set of values and in the dictionary such that is defined. For every in lexicographic order, sets and checks whether . If so, sets and breaks the loop. If the loop processes all and without breaking, sets .
(3) On receiving from party and from , recovers the corresponding stored in step 1. It ignores this message if either of the following conditions fails: if , then , or all servers in are honest. Otherwise, picks if it has not been defined and sends to . If (without the above conditions failing), then adds every to and sends to . If replies , records it.

Remark 4. employs the ideal user-provided password and biometric test in the ideal world. Therefore, if the adversarial personal agents in the real world acted honestly, the simulator provided correct pairs , and the calculated credential and pseudonym will be valid (present in the ledger), since they are computed using the actual password and biometric. If, on the other hand, the personal agents acted maliciously in the real world, detects this in the previous step and provides wrong pairs to in the ideal world. So in both worlds, the response will be invalid.
(4) Upon receiving from , forwards to in the real world.

(3) The Indistinguishability.
(i) : This is the real world: the system constructed in this work is run between honest parties and parties controlled by the adversary.
(ii) : Identical to , except that the encryption generated in the registration phase by honest users is replaced with a simulated one. Indistinguishability between and follows from the security properties of El-Gamal encryption.
(iii) : Identical to , except that in the TOPRF, each share ( and ) generated by honest users from the actual password and biometric is replaced by randomly chosen and . Since does not have the correct password and biometric, indistinguishability between and follows from the indistinguishability of the TOPRF algorithm and the TSS construction.
(a) Reduction 1: TOPRF security ensures that the senders (adversarial personal agents) cannot tell whether the receiver's (the simulated user's) inputs are the actual password and biometric or another randomly chosen pair .
(b) Reduction 2: TSS security ensures that fewer than the threshold number of agents can neither reconstruct the secret nor check whether the shares relate to the same secret. Hence the adversary has no efficient way to distinguish this from the real behavior, since one more agent would need to be corrupted to mount a successful offline attack.
(iv) : Identical to , except that an authentication response ( and ), two random group elements generated by the adversary, is rejected if the extracted secret key does not fulfill the requirements. Indistinguishability between and follows from the verified consistency of the bilinear pairing; otherwise, the simulation breaks the soundness of the underlying proof of knowledge used before (assuming no hash collision).
(v) : This is the world simulated by . It is not hard to check that is identical to .

We already know that the probability of breaking the TOPRF and the NIZK proofs is negligible.

5. Implementation

In this section, we illustrate the practicability of the proposed protocol. To this end, we realize the public ledger part with two well-known blockchains, namely, Namecoin and Ethereum. The results are summarized in Table 1. Here, the initial data size is the size of the blockchain that must be downloaded and stored. The initial sync time is the time required to sync and connect to the blockchain. The confirmation time is the time required to confirm that the data have been uploaded to the blockchain.

5.1. Namecoin Implementation

The public ledger can be implemented by a blockchain system. One straightforward way to realize a public ledger is the Namecoin blockchain. Namecoin allows registering names and storing related values in the blockchain, which is a securely distributed shared database. It also offers basic features to query the database and to retrieve the list of existing names and associated data. Thus, we can store credentials, scan them based on their namespace, and then verify them. We execute the following steps to participate in the Namecoin system and store credentials under Namecoin ids as pseudonyms:
(i) We need to install a Namecoin client that holds a full copy of the Namecoin blockchain and keeps it in sync with the P2P network by fetching and validating new blocks from connected peers. We use the Namecoin client implementation [64], which can be controlled via HTTP JSON-RPC, the command line, or a graphical interface. It automatically connects to the Namecoin network and downloads the blockchain.
(ii) The Namecoin client also creates the user's wallet, which includes the private key of the user's Namecoin address.
(iii) To save credentials in the blockchain, the user registers a namespace "id/name" as the owner of the name by paying a very small fee (currently about 0.0064 USD). An id name can be registered using the Namecoin graphical interface or the commands "name_new" and "name_firstupdate." The following shows how an id name in the Namecoin namespace is registered and how such names can be accessed:
    namecoind name_new id/3608a30756b0...
The output will look like this:
    ["0e0e03510b0b0b7dbba6e301e519693f6.8062121b29f3cd3a6652c238360d0d0", "9f213ff4a582fd65"]
This transaction shows a hashed version of the name, salted with a random value ("9f213..." for transaction ID "0e0e0351...").
(iv) The user can store arbitrary data as a description (which contains a credential) for Namecoin keys using the JSON format. The following is a simple example of the JSON value of an identity name:
    namecoind name_firstupdate id/3608...
Output:
    {"description": "28790de641755e77d1.3382229156f5c26a9dd8a9673006b...", "namecoin": "NBvmSUQbRGu..."}
Once the update has been confirmed and the transactions have been added to the blockchain, the user has a fully valid credential. To show the credential, scans through the list of added names and retrieves all credentials via the graphical interface or a command like the following:
    namecoind name_list
Output:
    [{"name": "id/3608a30756b07e...", "value": "28790de641755e77d13382.229156f5c26a9dd8a9673006b15...", "address": "NBvmSUQbRGunCS...", "expires_in": 36000}]
(1) Cost: Initially, a reasonable transaction fee of either 0.00 or 0.01 NMC is charged. We can choose this fee based on how fast we want the transaction to be processed.
(2) Latency: Namecoin and Bitcoin both attempt to generate blocks every 10 minutes; on average, it takes nearly 5 minutes for data to appear on the blockchain. In practice, it then takes additional time to solidify the transactions and verify the data. For Namecoin, it takes about 2 hours to confirm that the data are uploaded to the blockchain (12 confirmations). That is why name_firstupdate is only accepted after a mandatory waiting period of 12 additional blocks.

Remark 5. Note that these costs and delays occur only once during the setup and registration phases. They do not affect the authentication phase. Thus, we focus on the computation time of the authentication phase that is frequently used in the authentication system (see Section 5.3).

5.2. Ethereum

Ethereum allows us to test our decentralized application on a local blockchain; we use a test network called Rinkeby to build our decentralized application. We can connect to the Ethereum blockchain and even perform operations such as mining blocks, sending transactions, and deploying smart contracts by running an Ethereum node.
(i) We run the Ethereum wallet (Mist or the geth command line) to access the Ethereum protocol and deploy our smart contract.
(ii) To start, we need to sync the Rinkeby network locally and download the blockchain, which takes a few hours.
(iii) Create an account: enter a password for the Rinkeby account via the geth command line (Geth version 1.8.1-stable, using "geth account new") or the Ethereum graphical interface (Mist).
(iv) Next, obtain some Ether so that transactions can be sent. Since we use the Rinkeby testnet, Ether can be obtained for free at the faucet website. Ether is used to pay transaction fees.
(v) We can deploy smart contracts to store our credentials and names. For this purpose, we write our first smart contract in Solidity (a high-level contract language targeting the Ethereum Virtual Machine (EVM)) and deploy it through Mist. A simple example (with placeholder variable names, as the original identifiers were elided) is:
    pragma solidity ^0.4.2;
    contract Test {
        string public v1;
        string public v2;
        function Test(string _v1, string _v2) {
            v1 = _v1;
            v2 = _v2;
        }
    }
(vi) We can also watch previously deployed contracts and tokens by clicking "Watch Contracts" at the bottom and entering the contract's name and address.
(1) Cost: All transactions need some amount of gas to motivate processing. The transaction fee is between 0 and 0.000424 ETH, depending on how fast we want the transaction to be approved.
(2) Latency: Ethereum creates a new block every few seconds, so the data appear on the blockchain almost instantly. As mentioned in the Ethereum blog, 10 confirmations achieve a similar security level as 6 confirmations in Bitcoin. It takes around 3 minutes to confirm the transaction/data. Note that these costs and delays occur only once, during the setup and registration phases.

5.3. Performance of the Authentication System

We now examine the performance of our anonymous authentication system. There are two main steps: the registration phase and the authentication phase. Since the time-critical operations in both phases are the same, we concentrate our evaluation on the efficiency of these processes: the OPRF, issuing/receiving a credential, and proving knowledge of the signature and pseudonym. To simplify the evaluation criteria of the experiment, we assume a simple policy with a threshold of two agents. The experiment runs on a laptop with an Intel Core i5-6200U CPU at 2.30 GHz, 8.00 GB RAM, and 64-bit Ubuntu, in Java 8, building upon the upb.crypto library (available at https://github.com/cryptimeleon) [65]. This library offers elliptic-curve math and several useful building blocks for anonymous credentials, such as Pointcheval-Sanders (PS) signatures [47], Pedersen commitments [58], Nguyen's accumulator [66], Shamir secret sharing, generalized Schnorr protocols, proofs of partial knowledge [67], Damgård's technique for concurrently black-box secure Sigma protocols, and the Fiat-Shamir heuristic [56]. Table 2 shows the computational performance of the protocols over 50 iterations. For the issuing and proving protocols, where a credential must satisfy a certain policy, we assume equality of two attributes, with policy StuID = "11111" and GENDER = "male" and a credential certifying only these attributes.

5.4. Computational and Communication Complexity

We analyze the communication and computation complexity of our proposed protocol in terms of the size of each element exchanged in our protocol and the number of exponentiations needed for issuing a credential (executed only once, in the registration phase) and for proving a credential (the most frequently executed phase), respectively. Table 3 shows this efficiency analysis. , , , and denote the number of attributes that can be certified, the number of agents that need to be contacted, the cost of an exponentiation in , and the cost of a pairing computation, respectively. By (resp. ) we denote the cost of proving knowledge of secrets involved in a multi-exponentiation (resp. pairing-product) equation, and denotes the cost of verifying this proof.

5.5. Comparison

We provide a comparison of DAMFA with some of the most popular SSO schemes in Table 4. We compare DAMFA with these schemes in terms of decentralization (Decent.), passive verification (PV), multi-factor authentication (MF), formal definitions (FD), anonymity (Anony.), and selective disclosure (SD). Decent. denotes the decentralization of the SSO scheme (i.e., the user authentication process no longer depends on a trusted third party); we achieve this by applying a distributed transaction ledger and the blind issuing protocol. PV means that service providers can verify users (who have registered a particular credential) without requiring interaction with an identity provider; we fulfill this property using a distributed transaction ledger and anonymous credentials. Anonymity guarantees that no one can trace or learn information about the user's identity during the authentication process; we fulfill this property by applying NIZK proofs, PS signatures, and pseudonyms. Here, denotes that it is infeasible for IDPs to track users' sign-on activity at different SPs, and that multiple accounts created from the same credential at different SPs cannot be correlated. Conversely, indicates that either IDPs or SPs can correlate different accounts of the same user. FD indicates whether a scheme provides a formal security definition; DAMFA is the only scheme that provides a formal security definition and proof. SD allows users to disclose a subset of their attributes and prove statements about them. Finally, to protect the user's private information against offline attacks (OA), we use the TOPRF primitive. Here, means that the related schemes resist offline attacks only as long as the IDP is not compromised and the user's device (used as a 2FA token) is not stolen, lost, or corrupted; means that resistance to offline attacks holds even in the presence of a corrupted IDP or user's device.

6. Conclusion

In this paper, we proposed DAMFA, a decentralized authentication and key exchange (SSO) scheme based on the TOPRF protocol and standard cryptographic primitives. The proposed scheme builds upon a trustworthy global append-only ledger that does not rely on a trusted server. DAMFA fulfills the following properties:
(1) Decentralization: the process of user authentication no longer depends on a trusted party. To realize such a distributed ledger, we propose using a blockchain system already in real-world use with the cryptographic currency Bitcoin.
(2) Passive verification: service providers who have access to the shared ledger can verify users without requiring interaction with an identity provider.
(3) Single sign-on: a user logs in with a single ID at the identity provider and then gains access to any of several related systems, so users do not need to register with each service provider individually.
(4) Anonymity: no one can trace or learn information about the user's identity during the authentication process.
Finally, we evaluated that our protocol is efficient and practical for authentication systems.

Moreover, we provided a comparison of our scheme (DAMFA) with some of the most prominent SSO schemes. For a more detailed performance analysis, we analyzed the communication and computation complexity of our protocol in terms of the size of each element exchanged and the number of exponentiations, respectively. We proved our construction's security via ideal-real simulation, showing the infeasibility of offline dictionary attacks. Finally, we demonstrated that our protocol is efficient and practical through a prototype implementation, realizing the public ledger with the Ethereum and Namecoin blockchains.

Data Availability

No additional data are available.


Disclosure

This paper is an extended version of the paper entitled "DAMFA: Decentralized Anonymous Multi-Factor Authentication" [23], including complete proofs, formal security models, an Ethereum implementation, a comparison with other SSO schemes, a computation and communication complexity analysis, and improved experimental results.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by the Johannes Kepler Open Access Publishing Fund and has been carried out within the scope of Digidow, the Christian Doppler Laboratory for Private Digital Authentication in the Physical World. It has partially been supported by the LIT Secure and Correct Systems Lab. The authors are grateful for the financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, the Christian Doppler Research Association, 3 Banken IT GmbH, ekey biometric systems GmbH, Kepler Universitätsklinikum GmbH, NXP Semiconductors Austria GmbH and Co KG, Österreichische Staatsdruckerei GmbH, and the State of Upper Austria.