Abstract

Constrained verifiable random functions (VRFs) were introduced by Fuchsbauer. In a constrained VRF, one can derive a constrained key from the master secret key , where S is a subset of the domain. Using the constrained key , one can compute function values at points which are not in the set S. The security of constrained VRFs requires that the VRF's output be indistinguishable from a random value in the range. Fuchsbauer showed how to construct constrained VRFs for the bit-fixing class and the circuit-constrained class based on multilinear maps. These constructions only achieve selective security, where an attacker must declare which point it will attack at the beginning of the experiment. In this work, we propose a novel construction of constrained verifiable random functions from bilinear maps and prove that it satisfies a new security definition which is stronger than selective security. We call it semiadaptive security: the attacker is allowed to make evaluation queries before it outputs the challenge point. It follows immediately that if a scheme satisfies semiadaptive security, then it also satisfies selective security.

1. Introduction

Pseudorandom functions (PRFs) are one of the basic concepts in modern cryptography and were introduced by Goldreich et al. [1]. A PRF is an efficiently computable function . For a randomly chosen key , a probabilistic polynomial time (PPT) adversary cannot distinguish the outputs of the function for any from randomly chosen values from .

Boneh and Waters [2] extended the concept of PRFs and presented a new notion called constrained pseudorandom functions. A constrained PRF is the same as a standard PRF except that it is associated with a set . It contains a master key which can be used to evaluate at all points of the domain . Given the master key and a set , one can generate a constrained key which can be used to evaluate for any . Pseudorandomness requires that, given several constrained keys for sets and several function values at points chosen adaptively by the adversary, the adversary cannot distinguish a function value from a random value for all and . Constrained PRFs have been used to optimize the ciphertext length of broadcast encryption [2] and to construct multiparty key exchange [3].

Verifiable random functions (VRFs) were introduced by Micali et al. [4]. A VRF is similar to a pseudorandom function: it preserves pseudorandomness, in that a PPT adversary cannot distinguish an evaluated value from a random value even if it is given values at other points. A VRF has the additional property that the party holding the secret key can evaluate F on together with a noninteractive proof. With the proof, anyone can verify the correctness of a given evaluation using the public key. In addition, the evaluation of should remain pseudorandom even if an adversary can query values and proofs at other points. Lastly, verification should remain sound even if the public key was computed maliciously. VRFs have been used to construct zero-knowledge proofs [5], electronic payment schemes [6], and so on.

In SCN 2014, Fuchsbauer [7] extended the notion of VRFs to a new notion, called constrained VRFs. In addition to the three polynomial time algorithms , and , they defined another algorithm , which is used to derive a constrained key. For constrained VRFs, the algorithm generates a key pair . Given a constrained key for a set , the algorithm computes a value together with a proof π which can be used to verify the correctness of with the public key . A constrained VRF should satisfy the security notions of provability, uniqueness, and pseudorandomness. Pseudorandomness requires that the evaluation of should be indistinguishable from a random value, even if the adversary is given several constrained keys for subsets and several function values with proofs at points , where and .

A possible application of constrained VRFs is micropayments [8]. Micropayment schemes emphasize the ability to make payments of small amounts. In probability-based micropayments, a large number of users and merchants jointly select a user to pay a cheque. This converts the micropayments of many users into a macropayment made by a single user with small probability. In such a scheme, how do we decide which cheque C should be payable in a fair way? Using VRFs, merchant M publishes for a VRF with range . Cheque C is payable if , where s is a known selection rate. However, this approach has a drawback: it needs a public key infrastructure (PKI) for the merchants' keys . With constrained VRFs, every merchant uses the same key . Merchant M gets a constrained key for the set , where is the identity of merchant M. Cheque C is payable if . Anybody can check the result with the same public key . Therefore, no PKI for merchants is needed.
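The following Python sketch illustrates the probabilistic cheque-selection rule under the assumptions above. It is only an illustration: SHA-256 stands in for the VRF evaluation under the merchant's constrained key, and all names (vrf_eval_stand_in, is_payable, the 256-bit range, the rate 1/1000) are hypothetical choices, not part of the scheme in this paper.

```python
import hashlib

RANGE_BITS = 256                          # assumed size of the VRF range, 2^256
PAYABLE_THRESHOLD = (1 << RANGE_BITS) // 1000   # selection rate s = 1/1000

def vrf_eval_stand_in(constrained_key: bytes, cheque: bytes) -> int:
    """Stand-in for Eval(sk_{S_M}, C); a real deployment would use the
    constrained VRF of Section 5 and also return a proof."""
    digest = hashlib.sha256(constrained_key + cheque).digest()
    return int.from_bytes(digest, "big")

def is_payable(constrained_key: bytes, cheque: bytes) -> bool:
    """Cheque C is payable iff its VRF value falls below s * |range|."""
    return vrf_eval_stand_in(constrained_key, cheque) < PAYABLE_THRESHOLD

# Anyone holding the single public key pk (and the proof) could verify the
# claimed value before accepting that C is payable, so no per-merchant PKI
# is needed.
```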

Fuchsbauer [7] gave two constructions from multilinear maps, based on the constrained PRFs proposed by Boneh and Waters [2]. The first is a bit-fixing VRF, in which constrained keys can be derived for any set , where is described by a vector as the set of all strings that match at all coordinates that are not . The second is a circuit-constrained VRF, in which constrained keys can be derived for any set that is decidable by a polynomial-size circuit.

However, Fuchsbauer's constructions [7] only achieve selective security, a weaker notion where the adversary must commit to a challenge point at the beginning of the experiment. By the technique of complexity leveraging, any selectively secure scheme can be converted into an adaptively secure one, where the adversary can make its challenge query at any point. The reduction simply guesses beforehand which challenge value the adversary will query, so it incurs a security loss that is exponential in the input length. In this work, we ask an ambitious question: is it possible to construct a constrained VRF that satisfies a security notion stronger than selective security?

In this work, we propose a novel construction based on bilinear maps. Inspired by the constrained PRFs of Hohenberger et al. [9], we construct a VRF with constrained keys for any set of polynomial size and define a new security notion named semiadaptive security. It allows the adversary to query the evaluation oracle before it outputs a challenge point, while the public key is returned to the adversary together with the challenge evaluation. This definition is stronger than selective security, as can be verified easily.

Our scheme is derived from the constructions of constrained PRFs given by Hohenberger et al. [9]. It is defined over a bilinear group, which contains three groups of composite order , equipped with bilinear maps . The constrained VRFs map an input from to an element of . The secret key is a tuple , where and is an admissible hash function. The VRF is defined as , associated with a proof , where is the th bit of .

In order to verify the correctness of an evaluation, we define a public key as , where is an obfuscation of a circuit that takes a point x as input and outputs an element of . The verifier only needs to check and . The constrained key is an obfuscation of a circuit that has the secret key and the constrained set S hardwired in it. On input a value , it outputs . However, this solution would work only if the obfuscator achieved a black-box obfuscation definition [10]; there is no reason to believe that an indistinguishability obfuscator would necessarily hide the secret key .

We solve this problem by a technique introduced by Hohenberger et al. [9]. We divide the domain into two disjoint sets by the admissible hash function: a computable set and a challenge set. The proportion of the computable set in the domain is about , and the proportion of the challenge set is about , where is the number of queries made by the adversary. For the evaluation queries made before the adversary outputs the challenge point, we use the secret key to answer each query x and abort the experiment if x belongs to the challenge set. After the adversary outputs a challenge point , we use a freshly chosen secret key to answer the evaluation queries. Via a hybrid argument, we prove the pseudorandomness of our constrained VRFs under the weak Bilinear Diffie–Hellman Inversion (BDHI) assumption.
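The partition-and-abort behaviour of the reduction can be pictured with the following minimal sketch. It is only an illustration of the proof strategy: sample_partition, in_challenge_set, and answer_query are hypothetical names, and fixing roughly log2(q) coordinates is one standard way of making the challenge set occupy about a 1/q fraction of the domain.

```python
import random

def sample_partition(n_bits: int, q: int, seed=None):
    """Return u: a map from fixed coordinates to bits; all other coordinates
    are 'don't care'. Fixing about log2(q) coordinates makes
    Pr[h(x) matches u] roughly 1/q for a well-distributed h."""
    rng = random.Random(seed)
    fixed = max(1, q.bit_length())
    positions = rng.sample(range(n_bits), min(fixed, n_bits))
    return {i: rng.randint(0, 1) for i in positions}

def in_challenge_set(u: dict, hx: str) -> bool:
    """x lies in the challenge set iff h(x) matches u on every fixed coordinate."""
    return all(hx[i] == str(bit) for i, bit in u.items())

def answer_query(u: dict, hx: str, evaluate_with_sk):
    # Before the challenge point is committed, the simulator can only answer
    # queries in the computable set; a challenge-set query forces an abort.
    if in_challenge_set(u, hx):
        raise RuntimeError("abort: query falls in the challenge partition")
    return evaluate_with_sk(hx)
```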

1.1. Related Works

Lysyanskaya [11] gave a construction of VRFs in bilinear groups, but the size of proofs and keys is linear in the input size, which may be undesirable for resource-constrained users. Dodis and Yampolskiy [12] gave a simple and efficient construction of VRFs based on bilinear maps. Their VRFs' proofs and keys have constant size, but the construction is only suitable for small input spaces. Hohenberger and Waters [13] presented the first VRFs for exponentially large input spaces under a noninteractive assumption. Abdalla et al. [14] showed a relation between VRFs and identity-based key encapsulation mechanisms and proposed a new VRF-suitable identity-based key encapsulation mechanism from the decisional weak Bilinear Diffie–Hellman Inversion assumption.

Fuchsbauer et al. [15] studied the adaptive security of the GGM construction for constrained PRFs and gave a new reduction that only loses a quasipolynomial factor , where q is the number of the adversary's queries. Hofheinz et al. [16] gave a new constrained PRF construction for circuits that has a polynomial reduction to indistinguishability obfuscation in the random oracle model.

Kiayias et al. [17] introduced a novel cryptographic primitive called delegatable pseudorandom functions, which enable a proxy to evaluate a pseudorandom function on a strict subset of its domain using a trapdoor derived from the delegatable PRF's secret key. Boyle et al. [18] introduced functional PRFs, which can be seen as constrained PRFs. In functional PRFs, in addition to a master secret key, there are further secret keys for a function f, which allow one to evaluate the pseudorandom function on any y for which there exists an x such that . Chandran et al. [19] showed constructions of selectively secure constrained VRFs for the class of all polynomial-sized circuits.

Notations. In what follows, we denote by λ a security parameter. We say is negligible if holds for all polynomials and all sufficiently large λ. We write PPT for probabilistic polynomial time and denote by the set .
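Since the symbols in the negligibility condition are elided above, we record the standard definition we assume is intended:

```latex
% Standard definition of a negligible function (assumed to be the intended one):
\[
  \nu(\lambda)\ \text{is negligible} \iff
  \forall\,\text{polynomials } p(\cdot)\ \exists\,\lambda_0\ \forall\,\lambda > \lambda_0:\quad
  \nu(\lambda) < \frac{1}{p(\lambda)} .
\]
```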

2. Preliminaries

We first give the definition of admissible hash functions, which was introduced by Boneh and Boyen [20].

Definition 1 (see [20]). Let and θ be efficiently computable univariate polynomials. An efficiently computable function and an efficient randomized algorithm are admissible if the following holds: for any , define as follows: if for all and , else . Then, for any efficiently computable polynomial , , where , we have that
where the probability is taken only over .
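The following Python sketch shows the shape of the partition predicate induced by an admissible hash function. The exact symbols (which output value marks a match) are elided in the text, so the convention below, the wildcard encoding, and the names partition_predicate and WILDCARD are assumptions for illustration only.

```python
# Sketch of the Boneh-Boyen-style partition predicate P_u induced by an
# admissible hash function h: {0,1}^k -> {0,1}^n.
WILDCARD = None  # stands for the "don't care" symbol in the string u

def partition_predicate(u, hx: str) -> int:
    """u: sequence over {0, 1, WILDCARD}; hx: the bit string h(x), same length.
    Returns 0 when h(x) agrees with u on every non-wildcard coordinate
    (the 'challenge' partition under our assumed convention), else 1."""
    assert len(u) == len(hx)
    for u_i, h_i in zip(u, hx):
        if u_i is not WILDCARD and u_i != int(h_i):
            return 1
    return 0
```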
Next, we present the formal definition of indistinguishability obfuscation following the syntax of Garg et al. [21].

Definition 2 (indistinguishability obfuscation ()). A uniform PPT machine is called an indistinguishability obfuscator for a circuit class if the following holds:
(i) Correctness: for all security parameters , for all , and for all inputs x, we have
(ii) Indistinguishability: for any (not necessarily uniform) PPT distinguisher , there exists a negligible function such that the following holds: if , then
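The two conditions above have their formulas elided; for completeness, we state the standard form we assume is intended:

```latex
% Standard statement of the two iO conditions (assumed to match the source):
\begin{itemize}
  \item Correctness: $\forall \lambda,\ \forall C \in \mathcal{C}_{\lambda},\ \forall x:\
        i\mathcal{O}(\lambda, C)(x) = C(x)$.
  \item Indistinguishability: if $C_0(x) = C_1(x)$ for all inputs $x$ and $|C_0| = |C_1|$, then
        $\bigl|\Pr[D(i\mathcal{O}(\lambda, C_0)) = 1] - \Pr[D(i\mathcal{O}(\lambda, C_1)) = 1]\bigr|
        \le \alpha(\lambda)$.
\end{itemize}
```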

2.1. Assumptions

Let be a PPT group generation algorithm that takes a security parameter as input and outputs a tuple , in which p and q are independent uniformly random -bit primes, and are groups of order , is a bilinear map, and and are the subgroups of of order p and q, respectively.

The subgroup decision assumption [22] in the bilinear group states that the uniform distribution on is computationally indistinguishable from the uniform distribution on a subgroup of or .

Assumption 1 (subgroup hiding for composite order bilinear groups). Let and . Let if , else . The advantage of algorithm in solving the subgroup decision problem is defined as
We say that the subgroup decision problem is hard if, for all PPT , is negligible in λ.

Assumption 2 (weak Bilinear Diffie–Hellman Inversion). Let , and . Let . Let if , else . The advantage of algorithm in solving the problem is defined as
We say that the weak Bilinear Diffie–Hellman Inversion problem is hard if, for all PPT , is negligible in λ.
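Because the parameters of Assumption 2 are elided, we recall one common decisional formulation of the weak Bilinear Diffie–Hellman Inversion problem for orientation; the precise variant used in this paper may differ in its parameterization.

```latex
% One common decisional formulation (\ell-wBDHI*): given the tuple
\[
  \bigl(g,\ h,\ g^{a},\ g^{a^{2}},\ \dots,\ g^{a^{\ell}}\bigr)
  \quad\text{for uniformly random } a,
\]
% decide whether a challenge element $T$ equals $e(g,h)^{a^{\ell+1}}$ or is a
% uniformly random element of the target group $G_T$.
```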
Chase et al. [22] showed that many type assumptions can be implied by subgroup hiding in bilinear groups of composite order.

3. Definition

We recall the definition of constrained VRFs which was given by Fuchsbauer [7].

Let be an efficiently computable function, where is the key space, is the input domain, and is the range. F is said to be a constrained VRF with regard to a set if there exists a constrained key space , a proof space , and four algorithms:
(i) : a PPT algorithm that takes the security parameter λ as input and outputs a pair of keys , a description of the key space , and a constrained key space
(ii) : this algorithm takes the secret key and a set as input and outputs a constrained key
(iii) or : this algorithm takes the constrained key and a value x as input and outputs a pair of a function value and a proof if ; else it outputs
(iv) : this algorithm takes the public key , an input x, a function value y, and a proof π as input and outputs a value in , where "1" indicates that
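The interface below mirrors the four algorithms as an abstract Python class. It is a structural sketch only: the concrete key, value, and proof types are fixed by Setup in the actual scheme, and the method and type names are placeholders.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional, Tuple

class ConstrainedVRF(ABC):
    """Syntax-only sketch of the four algorithms of a constrained VRF."""

    @abstractmethod
    def setup(self, security_parameter: int) -> Tuple[Any, Any]:
        """Return (public_key, secret_key) and fix the key/range/proof spaces."""

    @abstractmethod
    def constrain(self, secret_key: Any, S: set) -> Any:
        """Derive the constrained key sk_S from the master secret key."""

    @abstractmethod
    def evaluate(self, constrained_key: Any, x: Any) -> Optional[Tuple[Any, Any]]:
        """Return (y, proof) when x is admissible for this key, else None."""

    @abstractmethod
    def verify(self, public_key: Any, x: Any, y: Any, proof: Any) -> bool:
        """Return True iff the proof certifies that y is the function value at x."""
```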

3.1. Provability

For all , , and , it holds that:
(i) If , then and
(ii) If , then

3.2. Uniqueness

For all , , and , one of the following conditions holds:
(i) ,
(ii) , or
(iii) ,
which implies that for every x there is only one value y such that .

3.3. Pseudorandomness

We consider the following experiment for :
(i) The challenger first chooses and then generates by running the algorithm and returns to the adversary
(ii) The challenger initializes two sets V and E and sets , where V will contain the points that the adversary cannot evaluate and E contains the points at which the adversary queries the evaluation oracle
(iii) The adversary is given the following oracles:
(1) Constrain: on input a set , if , return and set ; else return
(2) Evaluation: given , return and set
(3) Challenge: on input , if or , then return . Else, return if , or return a random value from if
(iv) outputs a bit ; if , the experiment outputs 1

A constrained VRF is pseudorandom if, for all PPT adversaries , it holds that
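The following Python sketch summarizes the experiment above as a driver routine. It is illustrative only: the exact admissibility bookkeeping (how V is updated by a Constrain query) is elided in the text, so it is left as an abstract predicate, and all names are hypothetical; vrf is any object implementing the interface from this section.

```python
import secrets

def pseudorandomness_experiment(vrf, adversary, security_parameter, admissible):
    b = secrets.randbits(1)
    pk, sk = vrf.setup(security_parameter)
    E = set()               # points already queried to the Evaluation oracle
    queried_sets = []       # sets queried to the Constrain oracle

    def constrain_oracle(S):
        queried_sets.append(frozenset(S))     # bookkeeping only; see text for V
        return vrf.constrain(sk, S)

    def evaluation_oracle(x):
        E.add(x)
        return vrf.evaluate(sk, x)

    def challenge_oracle(x_star):
        # Reject challenges the adversary could compute itself: already-queried
        # points, or points ruled out by the constrained keys it holds.
        if x_star in E or not admissible(x_star, queried_sets):
            return None
        if b == 0:
            y, _ = vrf.evaluate(sk, x_star)
            return y
        return secrets.token_bytes(32)        # stand-in for a random range element

    b_guess = adversary(pk, constrain_oracle, evaluation_oracle, challenge_oracle)
    return int(b_guess == b)                  # outputs 1 when the adversary wins
```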

3.4. Semiadaptive Security

We give a weaker definition of pseudorandomness, called semiadaptive security. It allows the adversary to make evaluation queries before it outputs a challenge point, while the public key is returned to the adversary after the adversary commits to a challenge point. In selective security, the adversary must commit to a challenge input at the beginning of the experiment. Therefore, if a scheme satisfies semiadaptive security, it must also satisfy selective security. The converse need not be true.

3.5. Puncturable Verifiable Random Functions

Puncturable VRFs are a special class of constrained VRFs, in which the constrained set contains only one value, i.e., . The properties of provability, uniqueness, and pseudorandomness are similar to the constrained VRFs. To avoid repetition, we omit the formal definitions.

4. Construction

In this section, we give our construction of puncturable VRFs. A puncturable VRF consists of four algorithms . The input domain is , where . The key space and range space are defined as part of the setup algorithm.
(i) : On input the security parameter , run such that and are subgroups of . Let be polynomials such that there exists an admissible hash function . The key space is , the range is , and the proof space is . The setup algorithm chooses , and uniformly at random and sets . The public key contains an obfuscation of a circuit , where is described in Figure 1. Note that has hardwired in it. Set , where is padded to be of appropriate size. The puncturable VRF F is defined as follows. Let , where . Then,
(ii) : This algorithm computes an obfuscation of a circuit which is defined in Figure 2. Note that has the secret key and the punctured value hardwired in it. Set , where is padded to be of appropriate size.
(iii) or : The punctured key is a program that takes an -bit input x. We define
(iv) : To verify with regard to , compute and output 1 if the following equations are satisfied:
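The control flow of the punctured-key program (the circuit of Figure 2) can be sketched as follows. The concrete group operations behind the value and proof computations are elided in the text, so they are passed in as abstract callables; make_punctured_key_circuit is a hypothetical name used only for illustration.

```python
def make_punctured_key_circuit(sk, x_star, F, P):
    """Return the circuit that the construction obfuscates with iO.
    F(sk, x) computes the function value; P(sk, x) computes the proof."""
    def circuit(x):
        if x == x_star:
            return None                  # no value or proof at the punctured point
        return F(sk, x), P(sk, x)        # (function value, proof) elsewhere
    return circuit

# In the scheme, the constrained key is the obfuscation of this circuit,
# padded to an appropriate size before obfuscation.
```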

4.1. Properties
4.1.1. Provability

From the definition of F and P, we observe that for , , if

Therefore, we have
When , we can get that . This completes the proof of provability.

4.1.2. Uniqueness

Consider a public key , where , and is described in Figure 1. Given a value and two pairs and satisfying for , we show that .

From the verification equations, we observe that and . Because , then . Therefore, we can get that .

4.2. Proof of Pseudorandomness

In this section, we prove that our construction is a secure puncturable VRF as defined in Section 3.

Theorem 1. Assuming is a secure indistinguishability obfuscator and the subgroup hiding assumption for composite order bilinear groups holds, the construction described above satisfies semiadaptive security as defined in Section 3.

Proof. To prove the above theorem, we first define a sequence of games, where the first one is the original pseudorandomness security game, and show that each pair of adjacent games is computationally indistinguishable to any PPT adversary . Without loss of generality, we assume that the adversary makes evaluation queries before outputting the challenge point, where is a polynomial. We present a full description of each game and underline the changes from the current game to the previous one. Each such game is completely characterized by its key generation algorithm and its challenge answer. The differences between these games are summarized in Table 1.

4.2.1. Game 1

The first game is the original security game for our construction. Here, the challenger first chooses a puncturable VRF key. Then, makes evaluation queries and finally outputs a challenge point. The challenger responds with either a VRF evaluation or a random value.
(1) The challenger runs , chooses , and sets and . Then, the challenger flips a coin .
(2) The adversary makes an evaluation query . Then, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger computes and , sets and , and returns to the adversary .
(5) The adversary outputs a bit and wins if .

4.2.2. Game 2

This game is the same as Game 1 except that a partitioning game is simulated. If an undesirable partition is queried, we abort the game. The partitioning is defined as follows: the challenger samples a string using the algorithm of the admissible hash function and aborts if either there exists an evaluation query x such that or the challenge query satisfies .
(1) The challenger runs , chooses , and sets and . Then, the challenger flips a coin and runs .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger computes and , sets and , and returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 1. For any PPT adversary , if wins with advantage ϵ in Game 1, then it wins with advantage in Game 2.

Proof. The difference between Game 1 and Game 2 is that we add an abort condition in Game 2. From the admissibility of the hash function h, we get that for all , . The two experiments are identical if Game 2 does not abort. Therefore, if wins with advantage ϵ in Game 1, then it wins with advantage at least in Game 2.

4.2.3. Game 3

This game is the same as the previous one except that the public key and the punctured key are obfuscations of two other circuits, defined in Figures 3 and 4, respectively. On inputs x such that , the public key and the punctured key use the same secret key as before. However, if , the public key and the punctured key use a different secret key which is chosen randomly from the key space. The detailed description is given as follows:
(1) The challenger runs , chooses , and sets . Then, the challenger flips a coin and runs .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger chooses , sets , computes and , and sets and . Then, it returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 2. Assuming is a secure indistinguishability obfuscator and Assumption 1 holds, Game 2 and Game 3 are computationally indistinguishable.
The proof is given in Section 4.3.

4.2.4. Game 4

This game is the same as the previous one except that the secret key is generated differently. We make some elements of the secret key contain a factor a, for use on inputs x where . The detailed description is given as follows:
(1) The challenger runs , chooses , and sets and . Then, the challenger flips a coin and runs .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger chooses , . Set if , else . Let , compute , , and , and set and . Then, it returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 3. The outputs of Game 3 and Game 4 are statistically indistinguishable.

Proof. Recall that the difference between Game 3 and Game 4 is the manner in which are chosen. In Game 3, are chosen randomly from , while in Game 4, the challenger first chooses and and sets if , else . Since , which is invertible with overwhelming probability, is a uniform element of . Hence, the two experiments are statistically indistinguishable.

4.2.5. Game 5

This game is the same as the previous one except that the values hardwired into the circuits and are changed. The two circuits contain some constants . When , the related function values are computed using the constants . The detailed description is given as follows:
(1) The challenger runs , chooses , and , and sets and . Then, the challenger flips a coin and runs .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger chooses . Set . Let , compute , , , and . Set and , where the descriptions of and are given in Figures 5 and 6, respectively. Then, it returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 4. Assuming is a secure indistinguishability obfuscator, Game 4 and Game 5 are computationally indistinguishable.

Proof. We will introduce an intermediate experiment and prove that Game 4 and are computationally indistinguishable and Game and Game 5 are computationally indistinguishable.
The experiment is the same as Game 5 except that is generated by obfuscating the circuit in Step 4. Assume that there exists a PPT adversary that distinguishes the outputs of Game 4 and Game ; we construct a PPT adversary that breaks the security with the same probability. runs Step 1 and Step 3 as in Game 4. If the experiment does not abort, chooses values to construct the circuits: . Set if , else . Let , , and . constructs the circuits and , where is replaced by . Then, it sends to the challenger and gets . computes and returns to the adversary . outputs ; if , outputs 0, else it outputs 1. The circuits and have identical functionality. We observe that for any string x, :
(i) For any such that , both circuits output
(ii) For any such that , we have , where
(iii) For any such that , we have
Therefore, if there exists an adversary that distinguishes the outputs of Game 4 and Game with advantage ϵ, then there exists an adversary that breaks the security of with the same advantage.
The indistinguishability of Game and Game 5 is similar to the previous one (Game 4 and Game ).

4.2.6. Game 6

This game is the same as the previous one except that is replaced by a random element from . Formally, the challenger chooses a random element , and uses to replace .

Lemma 5. If there exists an adversary that distinguishes Game 5 and Game 6 with advantage ϵ, then there exists an adversary that breaks assumption 2 with advantage ϵ.

Proof. We observe that the difference between Game 5 and Game 6 is that the element in Game 5 is replaced by a random element in Game 6. receives an instance , where T is either equal to or a random element of . Then, simulates Game 5 except that . outputs . If , outputs 0, which indicates that ; else, outputs 1, which implies that T is a random element from .
We observe that both and are chosen randomly from in Game 6. This completes the proof of Theorem 1.

4.3. Proof of Lemma 2

The major difference between Game 2 and Game 3 lies in the "challenge partition" inputs x where . Therefore, in order to show that for any PPT adversary the outputs of Game 2 and Game 3 are indistinguishable, we give a sequence of subexperiments Game to Game and prove that any PPT attacker's advantage in each game must be negligibly close to that in the previous one. We omit the previous experiment Game 2 and describe the intermediate experiments. In the first game, we change the secret key such that the circuit computes the output in a different manner while the output stays the same as in the original circuit. Next, using the weak Bilinear Diffie–Hellman Inversion assumption, we modify the constants hardwired in the program so that the output on all challenge partition inputs is changed. Essentially, a different base for the challenge partition is used in the two programs. Finally, using the Subgroup Hiding Assumption and the Chinese Remainder Theorem, we can change the exponents for the challenge partition and ensure that the original circuit (in Game 2) and the final circuit (in Game 3) use different secret keys for the challenge partition.

4.3.1. Game

In this game, we modify the secret key for . It is easy to verify that the two experiments are statistically indistinguishable. The detailed description is given as follows:
(1) The challenger flips a coin and runs . Then, the challenger runs , chooses , and , sets if , else , and and .
(2) The adversary makes an evaluation query . The challenger checks if (recall that , if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger computes and , sets and , and returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 6. The outputs of Game 2 and Game are statistically indistinguishable.

Proof. We observe that the difference between Game 2 and Game is the manner in which are chosen. In Game 2, are chosen randomly from , while in Game , the challenger first chooses and and sets if , else . Since , which is invertible with overwhelming probability, is a uniformly random element of . Therefore, the two experiments are statistically indistinguishable.

4.3.2. Game

This game is the same as the previous one except that the values hardwired into the circuit are changed. The domain is divided into two disjoint sets by the admissible hash function. When , all elements used to compute the function value y contain a factor a; therefore, the related function values can be computed by . On the other hand, when , only some of the elements used to compute the function value y contain the factor a; therefore, the related function values can only be computed by .
(1) The challenger flips a coin and runs . Let . Then, the challenger runs , chooses , and , sets if , else , and , , and , where the description of is given in Figure 7.
(2) The adversary makes an evaluation query . The challenger checks if (recall that , if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger computes and , sets and , and returns to the adversary , where the description of is given in Figure 8.
(5) The adversary outputs a bit and wins if .

Lemma 7. Assuming is a secure indistinguishability obfuscator, Game and Game are computationally indistinguishable.

Proof. The proof method is similar to that of Lemma 4.

4.3.3. Game

This game is the same as the previous one except that is chosen randomly from in Step 1, and in Step 4.

Lemma 8. If there exists an adversary that distinguishes the Game and Game with advantage ϵ, then there exists an adversary that breaks assumption 2 with advantage ϵ.

Proof. We observe that the difference between Game and Game is that the term is replaced by a random element of . This proof is similar to the proof of Lemma 5.

4.3.4. Game

This game is the same as the previous one except that is chosen randomly from the subgroup , and is chosen randomly from the subgroup in Step 1.

Lemma 9. Assuming assumption 1 holds, Game and Game are computationally indistinguishable.

Proof. We introduce an intermediate experiment and show that Game and are computationally indistinguishable. Similarly, Game and Game are computationally indistinguishable.
Game is the same as Game except that v is chosen from . Suppose that there exists an adversary which can distinguish Game and ; we construct an adversary that breaks Assumption 1. receives , where or . sets , chooses , and , and computes as in Game . Then, runs the remaining steps as in Game . Finally, outputs ; if , guesses , else it guesses . Note that simulates exactly Game when , and simulates exactly Game when . Therefore, if there exists an adversary that distinguishes the outputs of and with advantage ϵ, then there exists an adversary that breaks Assumption 1.

4.3.5. Game

This game is the same as the previous one except that the secret key is divided into two parts and . If , the related function values are computed by . Else, the related function values are computed by .
(1) The challenger flips a coin and runs . Let . Then, the challenger runs , chooses , and , and sets , and .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger computes and , sets and , and returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 10. Assuming is a secure indistinguishability obfuscator, Game and Game are computationally indistinguishable.

Proof. We introduce two intermediate experiments and and show that Game and are computationally indistinguishable, Game and Game are computationally indistinguishable, and Game and Game are computationally indistinguishable.
First, we define the experiments and . In experiment , the challenger samples as in Game , sets if , else . Then, it answers the evaluation queries of exactly as in Game . For the challenge queries, it sets and computes . The public key is computed by the circuit , where and . The constrained key is computed as in Game .
Game is the same as the game , except that the constrained key is computed by the circuit .

Claim 1. Assuming is a secure indistinguishability obfuscator, Game and Game are computationally indistinguishable.

Proof. We construct a PPT adversary that uses to break the security of . runs Step 1 and Step 3 as in Game . On receiving the challenge point , sets and as in and constructs circuits and . Then, it sends to the challenger and receives . computes as in Game and sends to . returns ; if , outputs 0, else it outputs 1.
Next, we show that the circuits and have identical functionality. For any such that , . For any such that , . Therefore, the two circuits are functionally equivalent. Hence, if there exists an adversary that can distinguish the two games, then we can construct an adversary that breaks the security.

Claim 2. Assuming is a secure indistinguishability obfuscator, Game and Game are computationally indistinguishable.

Proof. The proof method is similar to the previous one.

Claim 3. Game and Game are statistically indistinguishable.

Proof. The difference between Game and Game is the way in which is chosen. In addition, is replaced by the value in Game . Since , a is invertible with overwhelming probability. Therefore, is a uniform element of and is also a uniformly random element of in Game . It follows that the two experiments are statistically indistinguishable.

4.3.6. Game

This game is the same as the previous one except that the secret key is generated differently from .
(1) The challenger flips a coin and runs . Let . Then, the challenger runs , chooses , , and , and sets , , and .
(2) The adversary makes an evaluation query . The challenger checks if (recall that if for all ). If not, the game aborts. Else, the challenger computes and outputs .
(3) The adversary sends a challenge point such that for all .
(4) The challenger checks if . If not, the game aborts. Else, the challenger computes and , sets and , and returns to the adversary .
(5) The adversary outputs a bit and wins if .

Lemma 11. Game and Game are statistically indistinguishable.

Proof. The only difference between Game and Game is the way in which the secret keys and are chosen. In Game , the challenger chooses and sets and , while in Game , the challenger chooses and sets and . By the Chinese Remainder Theorem, the distributions and are statistically indistinguishable. Therefore, Game and Game are statistically indistinguishable.

Lemma 12. Assuming assumption 1 holds, Game and Game 3 are computationally indistinguishable.

Proof. The proof method is similar to Lemma 9.

5. Constrained Verifiable Random Function

In this section, we give our construction of constrained verifiable random functions for constrained sets of polynomial size. We embed the puncturable VRFs in the constrained VRFs. Informally, our algorithm works as follows. The setup algorithm is the same as for the puncturable VRFs. The constrained key for a subset S is a circuit which has the secret key hardwired in it. On input a value x, the circuit computes the function value and proof using the puncturable VRFs if . The verification algorithm is the same as for the puncturable VRFs. When proving pseudorandomness, we translate puncturable VRFs into constrained VRFs with polynomial-size constrained sets by means of a hybrid argument. Once the adversary queries the constrained key for the polynomial-size set , the challenger can guess the challenge point with probability . Subsequently, the secret key can be replaced by a constrained key of the puncturable VRFs. Via a hybrid argument, we reduce the pseudorandomness of the constrained VRFs to the pseudorandomness of the puncturable VRFs.

Let be a puncturable VRF , and let be the proof generation function. We construct constrained VRFs by invoking the puncturable VRFs:
(i) : Run the algorithm . Set and .
(ii) : This algorithm takes the secret key and the constrained set S as inputs, where , and computes an obfuscation of a circuit defined as in Figure 9. has the secret key, the function descriptions F and P, and the constrained set S hardwired in it. Set , where is padded to be of appropriate size.
(iii) or : The constrained key is a program that takes x as input. Define .
(iv) : This algorithm is the same as
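The structure of the constrained-key circuit of Figure 9 can be sketched as follows. Because the admissibility condition on x (whether the key answers inside or outside of S) is elided in the text, it is kept abstract as a predicate here; make_constrained_key_circuit and may_evaluate are hypothetical names, and F and P are the puncturable VRF's evaluation and proof functions.

```python
def make_constrained_key_circuit(sk, S, F, P, may_evaluate):
    """Return the circuit that Constrain obfuscates with iO for the set S."""
    def circuit(x):
        if not may_evaluate(x, S):
            return None                  # outside the key's authority: no output
        return F(sk, x), P(sk, x)        # value and proof computed as in Section 4
    return circuit

# As in the puncturable case, the constrained key published to the user is the
# obfuscation of this circuit, padded to an appropriate size.
```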

The provability and uniqueness follow from the puncturable VRFs. We omit the detailed description. Next, we show that this construction satisfies the pseudorandomness defined in Section 3.

Theorem 2. Assuming is a secure indistinguishability obfuscator and is a secure puncturable VRF, the construction defined above satisfies pseudorandomness.

Proof. Without loss of generality, we assume the adversary makes evaluation queries and constrained key queries. We present a full description of each game and underline the changes from the current game to the previous one.

5.1. Game 1

The first game is the original security game for our construction. Here, the challenger first chooses a constrained VRF key pair . Then, makes evaluation queries and constrained key queries and outputs a challenge point. The challenger responds with either a VRF evaluation or a random element.
(i) The challenger chooses and then generates by running the algorithm .
(ii) The adversary makes evaluation queries or constrained key queries:
(1) If sends an evaluation query , then output
(2) If sends a constrained key query for , output the constrained key
(iii) sends a challenge query such that for all and for all . Then, the challenger sets and outputs .
(iv) outputs and wins if .

5.2. Game 2

This game is the same as the previous one except that we introduce an abort condition. When the adversary makes the first constrained key query , the challenger guesses a challenge query . If the remaining queries do not contain , the experiment aborts. In addition, the experiment aborts if , where is the challenge query.
(i) The challenger chooses and then generates by running the algorithm .
(ii) The adversary makes evaluation queries or constrained key queries. For the first constrained key query , the challenger chooses and outputs . For all evaluation queries before the first constrained key query, the challenger outputs . For all queries after the first constrained key query, the challenger does as follows:
(1) If sends an evaluation query such that , the experiment aborts. Else, if , output
(2) If sends a constrained key query for such that , the experiment aborts. Else, output .
(iii) sends a challenge query such that for all and for all . If , the experiment aborts. Else, the challenger sets and outputs .
(iv) outputs and wins if .

Lemma 13. For any PPT adversary , if wins with advantage ϵ in Game 1, then it wins with advantage in Game 2.

Proof. According to the pseudorandomness definition in Section 3, the challenge point belongs to the constrained set. The two experiments are identical if Game 2 does not abort. Since the challenger guesses correctly with probability , if wins with advantage ϵ in Game 1, then it wins with advantage in Game 2.

5.3. Game

For , the experiment is the same as the previous one except that the constrained key queries use instead of in the first i experiments. We observe that Game is equal to Game 2.
(i) The challenger chooses and then generates by running the algorithm .
(ii) The adversary makes evaluation queries or constrained key queries. For the first constrained key query , the challenger chooses , computes , and outputs , where the description of the circuit is given in Figure 10. For all evaluation queries before the first constrained key query, the challenger outputs . For all queries after the first constrained key query, the challenger does as follows:
(1) If sends an evaluation query such that , the experiment aborts. Else, if , output
(2) If sends a constrained key query for such that , the experiment aborts. Else, if , output , else output , where the description of the circuit is given in Figure 10.
(iii) sends a challenge query such that for all and for all . If , the experiment aborts. Else, the challenger sets , computes , and outputs .
(iv) outputs and wins if .

Lemma 14. Assuming is a secure indistinguishability obfuscator, Game and are computationally indistinguishable.

Proof. We observe that the difference between Game and lies in the response to the th constrained key query. In Game , , while in Game , . In order to prove that the two games are indistinguishable, we only need to show that the circuits and are functionally identical:
(i) If , both circuits output
(ii) For any input ,
Therefore, by the security of , the two experiments are indistinguishable.

5.4. Game 3

This game is the same as Game except that is replaced by a random element from .
(i) The challenger chooses and then generates by running the algorithm .
(ii) The adversary makes evaluation queries or constrained key queries. For the first constrained key query , the challenger chooses and outputs . For all evaluation queries before the first constrained key query, the challenger outputs . For all queries after the first constrained key query, the challenger does as follows:
(1) If sends an evaluation query such that , the experiment aborts. Else, if , output
(2) If sends a constrained key query for such that , the experiment aborts. Else, output
(iii) sends a challenge query such that for all and for all . If , the experiment aborts. Else, the challenger sets and and outputs .
(iv) outputs and wins if .

Lemma 15. Assuming the puncturable VRFs are secure, Game and Game 3 are computationally indistinguishable.

Proof. We prove that if there exists an adversary that distinguishes the Game and Game 3, then there exists another adversary that breaks the security of puncturable VRFs.
can simulate the experiment perfectly for . For each evaluation query x before the first constrained key query, sends x to the puncturable VRF challenger and returns to . When queries the constrained key , chooses , sends to the challenger, and receives . Then, uses to respond to the remaining queries. On receiving the challenge input , checks and outputs y. outputs the response of . We observe that if y is chosen randomly, then simulates Game 3; else, it simulates Game . Therefore, Game and Game 3 are computationally indistinguishable.
We observe that both and are chosen randomly from . Therefore, for any PPT adversary , it has negligible advantage in Game 3. This completes the proof of Theorem 2.

6. Conclusion

In this work, we construct a novel constrained VRF for polynomial-size sets and prove its security under a new security definition called semiadaptive security. Meanwhile, our construction is based on bilinear maps, which avoids the use of multilinear maps. Although it does not achieve fully adaptive security, it improves on selective security by allowing the adversary to query the evaluation oracle before it outputs the challenge point. Constructing fully adaptively secure constrained VRFs is left as future work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61871430 and 61971458), Science and Technology Project of Henan Educational Department (no. 20A520012), and Special Project of Key Scientific Research Projects of Henan Higher Education Institutions (no. 19zx010).