Abstract
In the research of searchable encryption, fine-grained data authorization is a convenient way to manage users' search rights. Recently, Liu et al. proposed a fine-grained searchable scheme with verification, which can control search authorization and verify the results. In this paper, we first present a forgery attack against Liu et al.'s scheme and then propose a novel scheme of verifiable data search with fine-grained authorization in the edge environment. Based on the key aggregate mechanism and the Merkle hash tree, our proposed scheme not only achieves file-oriented search permission management but also implements correctness and completeness verification of search results. In addition, with the assistance of the edge server, resource-constrained users can easily perform the tasks of search and verification. Finally, we prove that our scheme is secure based on the decision ℓ-bilinear Diffie–Hellman exponent problem. The performance analysis and experimental results demonstrate that our proposed scheme has lower computation, communication, and storage costs compared with the existing schemes.
1. Introduction
With the rapid growth of Internet technology, cloud storage and computing services have been used extensively by businesses and individuals [1]. While cloud services bring convenience to people, there are still many problems to be solved, such as the security and retrievability of data [2]. To satisfy these requirements, the primitive of searchable encryption (SE) [3, 4] was proposed; as a promising technology, SE allows users to search encrypted data while protecting privacy. Traditional SE schemes are usually deployed in the cloud environment (such as [5–7]) and are better suited to personal-computer users, since such clients can handle computationally intensive work like encrypting and decrypting files and configuring a large number of attributes.
In the era of the mobile Internet, terminal users are becoming increasingly diverse. In addition to computers, more and more people are using mobile phones, tablets, wearables, and other devices to perceive and receive data. Constrained by limited resources, these devices cannot bear complex computations and tasks. Therefore, it is urgent to find a more friendly environment for resource-constrained terminals. Recently, edge computing has been proposed as a new paradigm [8, 9]. It places storage, computing, and network devices between the users and the cloud, which assists users in completing tedious tasks, or sinks cloud service functions to a favorable position, providing real-time data processing and intelligent analysis nearby. Apparently, edge computing can not only reduce the burden on terminals and decrease service response latency but also avoid congestion in the core network. Therefore, deploying SE technology in the edge environment may have wider applications.
SE technology includes symmetric encryption retrieval [3] and public key encryption retrieval [4], of which public key encryption retrieval is mostly utilized for multiple users. In the multiuser scenario, users can share data and cooperate and communicate with each other, which suits most mobile applications. For instance, in a music sharing platform, the user who publishes data is called the publisher, and the user who purchases and uses data is called the data user. In this system, data users can buy music content from the publisher; the users change dynamically, and so does the purchased content. In previous studies [10–12], data authorization is user-oriented, where users can only be authorized or revoked completely. In practical applications, more fine-grained management of data authorization is required. For example, when a user purchases new content or the purchased content expires, the publisher must distribute a new authorization or revoke the previous search rights. Therefore, file-oriented search permission management should be introduced.
In an outsourced storage environment, the user sometimes has to deal with a malicious server. For example, the data stored on the server may be corrupted, or the server may try to save computing resources during peak times. In these situations, the server may not want to process the whole database when responding to queries. From the perspective of users, they pay for the data and therefore expect guaranteed services. Hence, a data verification function is required when necessary. Recently, some schemes (e.g., [13, 14]) have been proposed to verify search results. However, these schemes can only verify the correctness of the returned data, not its completeness: if the cloud returns an insufficient number of files, the user cannot discover it. Later, Liu et al. [15] proposed a searchable scheme to verify the completeness of search results. Unfortunately, this scheme is vulnerable to a forgery attack, so users cannot correctly verify the completeness of search results.
In this paper, we propose an efficient verifiable data search scheme with file-oriented authorization in edge computing, in which publishers can distribute fine-grained search permissions, and data users can search for their desired data and verify the correctness and completeness of the results with the assistance of the edge server. Our contributions are as follows:
(i) We analyze and present an attack against Liu et al.'s scheme [15].
(ii) We propose a novel privacy-preserving file-oriented search scheme in edge computing. Based on the key aggregate mechanism, the proposed scheme implements fine-grained authorization and facilitates the distribution of search rights over massive data. Through the design of a Merkle hash tree, it achieves correctness and completeness verification of search results. In addition, with the assistance of the edge server, resource-constrained users can easily query data and verify the search results.
(iii) We optimize the costs of the proposed scheme. In keyword ciphertext processing, it uploads one Bloom filter value instead of all the encrypted keywords, so the communication and storage costs are related to the number of files instead of the number of keywords.
(iv) Based on the decision ℓ-BDHE problem, we prove that our scheme meets the security requirements. Performance evaluation and experiments demonstrate that our scheme is more practical and efficient than the available schemes.
The rest of this paper is organized as follows. Related work is reviewed in Section 2, and the relevant preliminaries are introduced in Section 3. Section 4 discusses the attack on Liu et al.'s scheme, and Section 5 presents the details of our proposed scheme. We then give the requirements analysis in Section 6 and the performance evaluation in Section 7. The last section concludes the paper.
2. Related Work
To address the issues of data searchability and privacy preservation, Song et al. [3] proposed the primitive of searchable symmetric encryption, which can search data in encrypted form. Later, Boneh et al. [4] proposed Public Key Searchable Encryption (PKSE) and applied it to a mail system. Since then, PKSE has become a research hotspot, and many solutions have been proposed, such as proxy reencryption PKSE [16], attribute-based PKSE [17], certificateless PKSE [18], and PKSE based on primes [19]. However, these schemes are suited to personal-computer clients, as they are computation-intensive and unfriendly to resource-constrained terminals.
To reduce the computing and storage overhead on the client side, Guo et al. [20] proposed a keyword search encryption framework for the edge environment, which offloads the computation-intensive tasks of the sensor to the edge server. However, they only propose a framework without a concrete implementation. Chen et al. [21] presented a privacy-preserving searchable encryption scheme in edge computing, which designs an S-HashMap index structure and supports fuzzy multikeyword search. Yet in their scheme, the user needs to calculate all keyword indexes and generate an index access tree, which requires a large amount of computation. Scheme [22] puts forward a witness-based PKSE solution in cloud-edge computing to resist keyword guessing attacks, but it can only resist external attackers, not internal attackers such as the curious cloud server. Wang et al. [23] proposed an image retrieval scheme with mobile edge computing, which introduces a cloud-guided image feature extraction method, thus reducing network traffic and improving retrieval accuracy, but this solution cannot be applied to keyword search scenarios. Scheme [24] proposed a user-centered data search framework that uses edge computing to make intelligent predictions of the user's search pattern, tailoring the search space so as to reduce processing time. However, the framework is user-centered and is not suitable for scenarios where numerous files are shared.
In scenarios where numerous searchable files are shared (such as [25–27]), the data owner can distribute file-oriented retrieval rights to other users. In these schemes, the data owner uses different keys to encrypt different documents and shares the corresponding keys with the legitimate users. Obviously, such a key management mechanism incurs high communication and storage costs, and the size of the search token increases with the number of shared files. Scheme [28] introduced a proxy server to transform the user's search token, so as to reduce the key cost on the user side. However, this method does not fundamentally solve the problem of inefficient key management. To address the key management overhead, Cui et al. [29] presented a key aggregate searchable encryption scheme for cloud storage, in which the client can search multiple files with one single key shared by the data owner. However, Zhou et al. [30] pointed out that scheme [29] suffers from the keyword guessing attack. Later, Li et al. [31] proposed a multiowner key aggregate searchable encryption scheme in which the user can submit one single trapdoor to search across multiple data owners' file records. Yet the scheme does not consider dynamic search right management.
For the verification mechanism, Zhang et al. [32] proposed a scheme that can verify the ranked results of single-keyword searches over data uploaded by multiple users. However, all the users need to interactively share the sorting data to obtain the final ranking order, which reduces practicability. Wang et al. [33] proposed a method to verify the query results of multiple keywords, which is suitable for large scalable databases, but this scheme uses an accumulator to implement verification, making the computing overhead hard for mobile clients to bear. Recently, schemes [13, 14] have been proposed to verify aggregate search results, but they can only verify correctness, not completeness. Scheme [15] can verify the completeness of results; however, it suffers from a forgery attack, so users cannot correctly verify the completeness of search results.
3. Preliminaries
3.1. Bilinear Pairing
Let G and G_T be two multiplicative cyclic groups of the same prime order p. A bilinear pairing is a map e: G × G → G_T with the following properties:
(1) Bilinearity: for all u, v ∈ G and a, b ∈ Z_p, we have e(u^a, v^b) = e(u, v)^{ab}.
(2) Computability: there is an efficient algorithm to calculate e(u, v) for any u, v ∈ G.
(3) Nondegeneracy: let g be a generator of G; then e(g, g) ≠ 1.
3.2. Merkle Hash Tree
The Merkle hash tree (MHT) is a binary-tree data structure based on a hash function. In an MHT, each leaf node stores the hash value of a data block, and each nonleaf node's value is calculated by hashing its children.
To construct an MHT, we first calculate the hash value of each data block and place the results at the lowest layer as leaf nodes. Next, we build the upper layer, in which each node's value is calculated by hashing its left and right children, and we continue building upper layers until the root.
The MHT is thus calculated recursively, layer by layer, through hash computations. Since a hash is a one-way function, the value of a parent node can only be calculated from its children, and a child node's value cannot be deduced from the parent. Therefore, once the root node's value is fixed, the correctness of all other nodes' values is guaranteed; namely, a change in any node's value will cause the root node's value to differ.
For example, suppose a data set {d_1, d_2, d_3, d_4} is input and the construction, shown in Figure 1, outputs the root node's value h_R. First, we compute the hash value h_i = H(d_i) for each leaf node; then we calculate the nonleaf nodes' values h_{12} = H(h_1 ‖ h_2) and h_{34} = H(h_3 ‖ h_4); at last, we output the root node's value h_R = H(h_{12} ‖ h_{34}).
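To make the layer-by-layer construction concrete, here is a minimal Python sketch that builds an MHT root bottom-up; SHA-256 stands in for the hash function H as an illustrative assumption.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a Merkle hash tree bottom-up and return the root value."""
    level = [h(b) for b in blocks]           # leaf layer: hashes of the data blocks
    while len(level) > 1:
        if len(level) % 2 == 1:              # duplicate the last node if the layer is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])  # parent = hash of (left child || right child)
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"d1", b"d2", b"d3", b"d4"]).hex())  # the root value h_R
```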

3.3. Bloom Filter
Bloom filter is a data structure used to represent collections, and it supports the following three operations:
(1) Init(m): This operation creates an empty Bloom filter BF, which is an m-bit array with every bit set to 0.
(2) Add(BF, x): This operation adds an element x to the Bloom filter BF. It first hashes the element with k independent hash functions h_1, …, h_k and then sets the h_j(x)-th bit to 1 for j = 1, …, k.
(3) Query(BF, x): This operation queries whether the element x is a member. It first computes the hash values h_1(x), …, h_k(x) and then checks whether all the corresponding bits equal 1. If so, x is judged to be a member; otherwise, x is not in the collection.
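The three operations can be sketched in Python as follows; deriving the k hash functions by salting SHA-256 with an index is an illustrative assumption rather than the scheme's concrete instantiation.

```python
import hashlib

class BloomFilter:
    """A minimal m-bit Bloom filter with k hash functions."""

    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = [0] * m                  # Init: an m-bit array of zeros

    def _positions(self, x: bytes):
        # k hash positions derived by salting SHA-256 with the function index
        for i in range(self.k):
            d = hashlib.sha256(i.to_bytes(4, "big") + x).digest()
            yield int.from_bytes(d, "big") % self.m

    def add(self, x: bytes):
        for p in self._positions(x):         # Add: set the h_j(x)-th bits to 1
            self.bits[p] = 1

    def query(self, x: bytes) -> bool:
        # Query: x can be a member only if all k bits are 1 (false positives possible)
        return all(self.bits[p] for p in self._positions(x))
```

Note that a Bloom filter can report false positives but never false negatives, which is why the scheme only requires a low false-positive rate.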
3.4. Keyed Hash Function
The keyed hash function is a verification and encryption mechanism used by communicating entities to ensure the integrity and confidentiality of message data; its security depends on the underlying hash function. A keyed hash function takes a message and a key as input and outputs a hash value used for data authentication and integrity verification.
In this paper, we use the keyed hash function HMAC [34] to process keywords, and the calculation method is as follows:
HMAC(K, m) = H((K ⊕ opad) ‖ H((K ⊕ ipad) ‖ m)),
where m is a message, K is the key padded to a 64-byte string, H is a hash function, ipad and opad are 64-byte strings consisting of the repeated bytes 0x36 and 0x5C, respectively, ⊕ represents the XOR operation, and ‖ represents concatenation.
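Python's standard library provides this construction directly, so a keyword tag can be computed as below; the 64-byte key and the use of SHA-256 as the underlying hash are illustrative assumptions, not the paper's mandated parameters.

```python
import hmac, hashlib

key = b"\x5c" * 64  # a 64-byte key, matching one SHA-256 input block
tag = hmac.new(key, b"keyword", hashlib.sha256).hexdigest()
print(tag)          # deterministic keyed digest, usable as a keyword tag
```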
3.5. Complexity Assumption
The complexity assumption is defined as follows:
(1) Decision ℓ-bilinear Diffie–Hellman exponent (decision ℓ-BDHE) problem: The problem [35] in group G works as follows. Given a vector of 2ℓ + 2 elements (h, g, g_1, …, g_ℓ, g_{ℓ+2}, …, g_{2ℓ}, Z) as input, where g, h ∈ G, g_i = g^{α^i} for a secret α ∈ Z_p, and the term g_{ℓ+1} = g^{α^{ℓ+1}} is missing (it is not given), decide whether Z = e(g_{ℓ+1}, h) or Z is a random value in G_T. For a polynomial-time adversary Ad, the advantage in this problem is defined as
Adv_Ad = |Pr[Ad(h, g, g_1, …, g_ℓ, g_{ℓ+2}, …, g_{2ℓ}, e(g_{ℓ+1}, h)) = 1] − Pr[Ad(h, g, g_1, …, g_ℓ, g_{ℓ+2}, …, g_{2ℓ}, Z) = 1]|.
(2) Decision ℓ-BDHE assumption: If for every polynomial-time adversary Ad the advantage Adv_Ad in the decision ℓ-BDHE problem is at most a negligible ε, then we say that the decision ℓ-BDHE problem is hard to solve.
4. Discussion of Liu et al.'s Scheme
4.1. Liu et al.’s Scheme
We briefly describe Liu et al.'s scheme [15] as follows:
(1) Init: The system initializes the parameters as follows. (a) Generates a bilinear map group system and sets n as the maximum number of documents. (b) Randomly picks a generator g and a secret exponent α and computes the public terms g_i = g^{α^i} for i ∈ {1, …, n, n + 2, …, 2n}. (c) Selects the one-way hash functions used by the scheme.
(2) KeyGen: The data owner chooses a random γ, sets the private key to γ, and sets the public key to v = g^γ.
(3) Encrypt: For the i-th document, the data owner does the following. (a) Generates a Bloom filter for this document's keyword set by hashing each keyword with the keyed hash function. (b) Picks a random t_i, computes the ciphertexts from t_i, v, and g_i, and sends them to the cloud together with the Bloom filter.
(4) Share: For a document subset S, the data owner computes the aggregate key k_S and sends it to the user.
(5) Trapdoor: The user generates the trapdoor from k_S and the queried keyword and sends it to the cloud.
(6) Retrieve: (a) The cloud tests the trapdoor against all files in S. (b) The cloud generates the verification proofs from the stored ciphertexts and Bloom filters.
(7) Verify: The user checks the search results against the proofs returned by the cloud.
4.2. Analysis
Liu et al.'s scheme enables an authorized user to retrieve multiple encrypted files with one shared key and to verify the completeness of the search results, but we find that this completeness verification can be defeated. In their scheme, the verification proof is calculated and transmitted by the cloud, which therefore has the opportunity to forge the proof without the user being able to detect it.
Let us look at a specific case. Suppose the authorized subset is S and the user queries keyword w; an honest cloud will return the complete search result together with the verification proofs given in (3).
Upon receiving the proofs, the user checks each returned file against verification equation (4) to confirm that it is correct.
However, the cloud can forge the proofs: it withholds a set of matching files from the result and replaces the proof component of each withheld file with a fake value, which may be arbitrary as long as it differs from the genuine value, and then returns the partial result to the user. This forged response still passes the user's Verify algorithm.
For each withheld file, the forged proof component causes (4) to fail, exactly as it would for a file that does not contain the queried keyword, while for each returned file the genuine proof makes (4) hold. Thus, the user believes that the search result is complete.
Therefore, Liu et al.’s scheme cannot correctly verify the completeness of search results.
5. System Model and Definitions
In this section, we first present the system model and formal definitions and then describe the requirements.
5.1. System Model
The system model is shown in Figure 2 and contains four entities: publisher, cloud server, edge server, and data user.
(i) Publisher. The publisher encrypts the shared data and outsources it to the cloud. After a purchase, the publisher distributes the corresponding keys to the user and the edge server for data query and verification.
(ii) Cloud Server. The cloud stores the uploaded data and responds to queries. The cloud is not fully trustworthy.
(iii) Edge Server. The edge assists users with the query and verification operations. It is trusted and acts like a front-end server on the user side.
(iv) Data User. Users purchase data and obtain search authorization; they can query data with the assistance of the edge server.

5.2. Formal Definition
Definition 1. Our proposed scheme consists of the following seven algorithms.
(1) Init(1^λ, n): This algorithm is operated by the publisher. It inputs a security parameter λ and the maximum file number n and outputs the system public parameters params.
(2) KeyGen(params): This algorithm is run by the publisher. It inputs the parameters params and outputs the hash key k_h, the publisher's secret key sk, and the public key pk.
(3) Encrypt(params, pk, W): This algorithm is run by the publisher. It inputs the parameters params, the public key pk, and the keyword sets W and outputs the ciphertext set C, the Bloom filter value set B, and the root node value R of a Merkle hash tree.
(4) Authorization(params, sk, ID, S): This algorithm is run by the publisher. It inputs the parameters params, the secret key sk, the user's public identity ID, and a subset S and outputs the authorization key k_S and the identity key k_ID.
(5) Trapdoor(k_S, k_h, w, k_ID): This algorithm is operated by the data user and the edge server. It takes the authorization key k_S, hash key k_h, keyword w, and identity key k_ID as input and outputs the trapdoor Tr.
(6) Search(params, C, B, Tr): This algorithm inputs the parameters params, the ciphertext set C, the Bloom filter value set B, and the trapdoor Tr and outputs the search result R_S and the verification proof P.
(7) Verification(params, R_S, P, pk): This algorithm takes the public parameters params, the search result R_S, the verification proof P, and the publisher's public key pk as input and outputs 1 if the result is correct and complete; otherwise, it outputs 0.
5.3. Requirements
5.3.1. Security
In our proposed scheme, the system should satisfy indistinguishability against selective-file chosen keyword attack (IND-SF-CKA) security [30].
We use a game between a Challenger Cha and a polynomial-time Adversary Ad to define the IND-SF-CKA model.
(1) Initial. Ad publishes the file set S* to be attacked.
(2) Setup. Cha builds the system, generates the public parameters, and sends the parameters and the keyword space to Ad.
(3) Process 1. Ad carries out a series of Authorization and Trapdoor queries as follows. Authorization query: Ad sends any file set S to Cha, where S has no intersection with S*, and receives the keys computed by Cha through the Authorization algorithm. Trapdoor query: Ad can adaptively query Cha with any keyword in the keyword space; Cha runs the Trapdoor algorithm to generate the trapdoor and sends it to Ad.
(4) Challenge. When Ad decides to finish Process 1, it generates two plaintext keywords w_0 and w_1 of the same length. Cha randomly selects b ∈ {0, 1} and the encryption randomness, runs the Encrypt algorithm on w_b to produce the challenge ciphertext, and sends it to Ad.
(5) Process 2. Ad continues to issue Authorization and Trapdoor queries as in Process 1, with the restriction that the Authorization queries cannot intersect S* and the Trapdoor queries cannot be on w_0 or w_1.
(6) Guess. Ad guesses the value of b and outputs b'. Ad wins the game if b' = b. Ad's advantage in this game is defined as Adv_Ad = |Pr[b' = b] − 1/2|.
Definition 2. The proposed scheme is IND-SF-CKA secure if Adv_Ad is negligible.
5.3.2. Consistency
Besides security, the proposed scheme should satisfy consistency when the trapdoor is used for queries [36].
Definition 3. For all distinct keywords w ≠ w', the scheme is consistent if the probability that the trapdoor generated for w matches the ciphertext encrypting w' is negligible; that is, the Search algorithm always rejects when the keyword contained in the Trapdoor differs from the keyword used in Encrypt.
5.3.3. Correctness and Completeness
The proposed scheme should guarantee that the search result is correct and complete, which is defined as follows based on the definition in [37].
Definition 4. For an authorized file set S and a query keyword w, the search result R_S is correct and complete when the following two conditions hold:
(1) Correctness: every file in R_S contains the keyword w; that is, for each i ∈ R_S, w ∈ W_i, where W_i denotes the keyword set of the i-th file.
(2) Completeness: R_S contains all authorized files that contain w; that is, R_S = {i ∈ S | w ∈ W_i}.
The correctness condition ensures that every file in the search result contains keyword w, and the completeness condition guarantees that the search result includes all authorized files containing keyword w. When the search result is correct and complete, the Verification algorithm outputs 1; otherwise, it outputs 0.
5.4. The Proposed Scheme
5.4.1. Overview
In scheme [15], the fundamental reason why the cloud can launch forgery attacks is that the verification process requires the cloud to participate in the calculation: the flow is "publisher computing -> cloud computing -> user computing," so the user cannot know whether the cloud has forged the proof. In our design, we change the verification flow to "publisher computing -> public proof -> user computing," and the cloud does not participate in computing but only forwards the data. Besides, in the "public proof" phase, we use the Merkle hash tree to ensure that the proof is not tampered with by the cloud, thus guaranteeing the correctness and completeness of the search data.
Secondly, in the index encryption phase of existing schemes [13–15, 29–31], every keyword associated with each file is encrypted and uploaded. Consequently, the search function needs to match the query value against each keyword, which greatly affects search efficiency. In our design, we store each file's keyword set in a Bloom filter, so only one string is uploaded per file. Hence, during keyword processing, the communication and storage costs are greatly reduced, and the overhead is related to the number of files rather than the number of keywords, as illustrated by the sketch below.
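As a rough sketch of this design choice (assuming HMAC-SHA-256 keyword tags and index-salted SHA-256 as the k Bloom filter hashes, as in the Section 3.3 sketch), one file's entire keyword set collapses into a single uploadable bit array:

```python
import hmac, hashlib

m, k = 1024, 4                        # Bloom filter length and hash count (assumed)
bits = [0] * m                        # the single per-file Bloom filter value
key = b"\x5c" * 64                    # the publisher's hash key k_h (illustrative)

def positions(tag: bytes):
    for i in range(k):                # k index-salted hash positions
        d = hashlib.sha256(i.to_bytes(4, "big") + tag).digest()
        yield int.from_bytes(d, "big") % m

for w in [b"music", b"rock", b"2021"]:            # the file's keyword set
    tag = hmac.new(key, w, hashlib.sha256).digest()
    for p in positions(tag):
        bits[p] = 1
# Only these m bits are uploaded for the file, regardless of how many keywords it has.
```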
Thirdly, in the existing works [13–15, 32, 33], the verification process is mostly performed in the cloud computing environment, and users need to carry out extensive calculations to verify, which is unacceptable for resource-constrained clients. In this paper, the edge server is introduced to assist users in search and verification, which greatly reduces the computation overhead for users.
5.4.2. Construction
Based on the definition described in Section 5.2, we propose a concrete construction as follows:
(1) Init(1^λ, n): The publisher generates the public parameters as follows. It generates a bilinear map system (G, G_T, e), where p is the order of G and G_T, sets n as the maximum number of files (so the complete file index set is {1, …, n}), picks a random generator g ∈ G and a random number α ∈ Z_p, and computes g_i = g^{α^i} for i ∈ {1, …, n, n + 2, …, 2n}. It chooses a collision-free hash function H used for the Merkle hash tree, chooses m as the maximum length of the Bloom filter together with k independent universal hash functions h_1, …, h_k mapping into [1, m], and chooses a keyed hash function HMAC. The system public parameters params consist of the bilinear map system, the terms g_i, and the chosen hash functions.
(2) KeyGen(params): The publisher generates a random string k_h as the hash key and chooses a random number γ ∈ Z_p as the secret key; the public key is v = g^γ.
(3) Encrypt(params, pk, W): The publisher encrypts the files and keywords as follows. Firstly, for each file index i ∈ {1, …, n}, it generates an empty Bloom filter BF_i, randomly chooses an integer t_i, and computes two encryption auxiliary values c_{i,1} and c_{i,2} from t_i, the public key v, and g_i. For each keyword w contained in the i-th file, it computes a keyword tag from HMAC(k_h, w) and t_i and inserts the tag into BF_i; it then sets B_i as the resulting Bloom filter value. Secondly, the publisher generates an MHT with n leaf nodes to guarantee the correctness of the file auxiliary values: each leaf node's value is the hash of the two auxiliary values, v_i = H(c_{i,1} ‖ c_{i,2}), each nonleaf node's value is the hash of its two children, and the root node's value R is thereby determined. Finally, the publisher uploads the auxiliary values {c_{i,1}, c_{i,2}} and the Bloom filter values {B_i} to the cloud and adds the root value R to his/her public key. Note that the cloud can rebuild the MHT since it has the encryption auxiliary values.
(4) Authorization(params, sk, ID, S): For a user's public identity ID and an authorized file subset S, the publisher computes the authorization key k_S by aggregating the terms of the public parameters indexed by S under the secret key γ (the key aggregate mechanism), computes an identity key k_ID bound to ID, and then sends the authorization key to the user and the identity key to the edge.
(5) Trapdoor(k_S, k_h, w, k_ID): For the keyword w, the user generates the search query from k_S and HMAC(k_h, w) and sends it to the edge server. When the search query is received, the edge first verifies the user's public identity, then uses the corresponding identity key to compute the trapdoor Tr, and sends Tr to the cloud.
(6) Search(params, C, B, Tr): On receiving Tr, the cloud server does the following. Firstly, for each file index i ∈ S, it computes the matching ciphertext for the i-th file from Tr and the public parameters and queries the Bloom filter B_i with it; if the query returns 1, the file is added to the result set R_S. Secondly, the cloud constructs the verification proof as follows. The auxiliary values are {c_{i,1}, c_{i,2}} for i ∈ S. For the Merkle hash tree proof, let Path denote the list of nodes on the paths from the leaf nodes of S to the root, and let Λ be the list of leaf nodes excluding the subset S; if there are sibling nodes in Λ, they are replaced by their parent nodes, yielding a minimal node set whose hashes, together with those along Path, allow the root value to be recomputed. The verification proof P consists of the auxiliary values and this MHT proof. Finally, the cloud sends the search result R_S and the proof P to the edge. Figure 3 illustrates the proof generation for the case n = 8: the proof contains the auxiliary values of the files in S, the paths from the result leaves to the root, and the minimal complementary set of node hashes from which the root value can be recomputed.
(7) Verification(params, R_S, P, pk): When the search result and proofs are received, the edge server does the following.

Firstly, for each file index i ∈ S, the edge computes the leaf value v_i = H(c_{i,1} ‖ c_{i,2}), recovers the root value from the MHT proof, and verifies whether it equals the published root R. If it holds, the auxiliary values are correct; otherwise, they are rejected.
Secondly, for each i ∈ S, the edge computes the auxiliary verification value from the verified auxiliary values and the trapdoor.
Finally, the edge sends the search result and the auxiliary verification values to the user.
After receiving the auxiliary verification values, the user can verify the search result as follows.
If the matching condition holds for each i ∈ R_S and fails for each i ∈ S \ R_S, then the user outputs 1, which means the search result is correct and complete; otherwise, the user outputs 0.
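The edge's root-recomputation step amounts to a standard Merkle authentication-path check. Below is a minimal sketch, assuming SHA-256 as H and a (sibling hash, side flag) encoding of the path supplied by the cloud; the function and parameter names are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_auxiliary(c1: bytes, c2: bytes, path, root: bytes) -> bool:
    """Check that one file's auxiliary values hash up to the published root R.

    `path` lists (sibling_hash, sibling_is_left) pairs from the leaf level
    up to the root, as the cloud would include them in the proof.
    """
    node = h(c1 + c2)                 # leaf value: hash of the two auxiliary values
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root               # must equal the root in the publisher's public key
```

If this check fails for any file, the edge rejects the proof, since a cloud that tampers with or withholds auxiliary values can no longer reproduce the published root.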
6. Requirements Analysis
6.1. Security
We prove that our scheme is IND-SF-CKA secure in the standard model based on the decision ℓ-BDHE assumption, as stated in the following theorem.
Theorem 1. If the decision ℓ-BDHE problem is hard to solve, then our proposed scheme satisfies IND-SF-CKA security.
Proof. Suppose there is an Adversary Ad that can break the IND-SF-CKA security of the proposed scheme; then a Challenger Cha can build an Algorithm D to solve the decision ℓ-BDHE problem. This contradicts the hypothesis that the decision ℓ-BDHE problem is hard to solve, which proves that the scheme is IND-SF-CKA secure.
Suppose D is given an instance of the decision ℓ-BDHE problem, which includes a bilinear pairing system and the parameters g, h, and g_i = g^{α^i} for i ∈ {1, …, ℓ, ℓ + 2, …, 2ℓ}, where α is unknown. D is further given a value Z ∈ G_T, and D needs to decide whether Z = e(g_{ℓ+1}, h) or Z is a random value in G_T.
D simulates the Challenger Cha to play the game with Ad as follows:
(1) Initial: D receives the file set S* that Ad wants to be challenged on, together with the parameters of the decision ℓ-BDHE instance.
(2) Setup: D generates the remaining parameters and provides them, together with the given instance parameters, to Ad as follows: (a) chooses the hash functions; (b) sets the keyword space; (c) randomly chooses a file index and embeds the instance parameters into the public key, so that the implicit secret key is unknown to D. Then D sends the public parameters to Ad.
(3) Process 1: Ad carries out a series of adaptive queries. Authorization query: Ad can query any file set S with S ∩ S* = ∅; D computes the corresponding keys from the instance parameters and sends them back. Trapdoor query: Ad can query any file set S and any keyword w in the keyword space; D generates the trapdoor and sends it to Ad.
(4) Challenge: When Ad decides to finish Process 1, it sends two equal-length keywords w_0 and w_1 to D, which cannot have been used in the Trapdoor queries of Process 1. Then D performs the following operations: (a) randomly picks a coin b ∈ {0, 1}; (b) builds the challenge ciphertext for w_b from the instance value Z, with the encryption randomness implicitly set from the unknown exponent, and sends the challenge ciphertext to Ad. Notice that if Z = e(g_{ℓ+1}, h), the challenge is a valid ciphertext encrypting keyword w_b under the chosen file index; otherwise, Z is a random element of G_T and the challenge is independent of b.
(5) Process 2: Similar to Process 1, Ad continues to issue Authorization and Trapdoor queries, and D replies with the same strategy.
(6) Guess: Ad guesses the value of b and outputs b'. If b' = b, then D outputs 1, which means Z = e(g_{ℓ+1}, h); otherwise, D outputs 0, which means Z is a random value in G_T.
(7) Probability analysis: If Z = e(g_{ℓ+1}, h), the challenge ciphertext is valid and Ad keeps its full advantage. Otherwise, Z is random, the ciphertext is invalid, and Ad gains no information about b, so Pr[b' = b] = 1/2. Hence, if Ad breaks the IND-SF-CKA security with non-negligible advantage ε, then D solves the decision ℓ-BDHE problem with non-negligible advantage. This contradicts the decision ℓ-BDHE assumption, so no such Ad exists.
Therefore, the proposed scheme satisfies IND-SF-CKA security.
6.2. Consistency
Theorem 2. If HMAC is a collision-free keyed hash function, then the proposed scheme is consistent.
Proof. If HMAC is collision-free, then for distinct keywords w ≠ w' we have HMAC(k_h, w) ≠ HMAC(k_h, w') except with negligible probability, so by equations (3)–(5) the trapdoor generated for w cannot match the ciphertext encrypting w'.
Conversely, for the same keyword, the matching ciphertext that the cloud derives from the Trapdoor equals the encrypted keyword tag generated by the publisher, so the Search algorithm accepts.
Therefore, the proposed scheme is consistent.
6.3. Correctness and Completeness
Theorem 3. If the hash functions h_1, …, h_k are random and the Bloom filter has a low false-positive rate, then the user can verify whether the result is correct and complete.
Proof. Correctness: Theorem 2 proves that our scheme is consistent, so (27) can be used to ensure that the returned files are correct. Assuming the Bloom filter has a low false-positive rate, when (27) equals 1, the i-th file contains the keyword w.
Completeness: In our scheme, for the authorized file set S and a query keyword w from a user, the cloud returns the search result R_S. By equations (7) and (8), if the matching condition holds for each i ∈ R_S and fails for each i ∈ S \ R_S, then R_S = {i ∈ S | w ∈ W_i}, so the search result is complete; that is, the cloud server returns all files that include the keyword w and no files that do not. Otherwise, it is incomplete. Meanwhile, the correctness of the encryption auxiliary values is guaranteed by the Merkle hash tree, which the cloud cannot forge.
Therefore, the user can verify whether the result is correct and complete.
7. Performance
7.1. Performance Analysis
In this section, we compare our scheme with three related schemes [14, 15, 29] in terms of various costs so as to analyze the performance. The notations used are defined in Table 1.
7.1.1. Functionality
We show the functionality comparison in Table 2; all schemes realize fine-grained authorization. Regarding verification, scheme [14] can only verify correctness, scheme [15] can verify correctness and completeness but has a security defect, and scheme [29] does not consider a verification module, while our scheme implements full verification in the cloud-edge environment.
7.1.2. Computation Cost
The comparison results of computation cost for the four schemes are shown in Table 3, which demonstrates that our scheme has less computation overhead than schemes [14, 15] on the whole.
In the Init, KeyGen, Extract, and Trapdoor phases, all four schemes have the same efficiency, because the parameters and data generated are alike. Note that the hash computation overhead is far cheaper than the other operations, so we did not include it in the statistics.
In the Encrypt phase, our scheme generates the same number of parameters and ciphertexts as scheme [29], while schemes [14, 15] need to generate more, so the Encrypt overhead of our scheme is less than that of schemes [14, 15].
In the Search phase, the overhead of our scheme is the same as that of scheme [29] and less than that of schemes [14, 15].
In the Verification phase, the user-side overhead of our scheme is less than that of schemes [14, 15], because the edge server undertakes most of the calculations.
Overall, the total computational cost of our scheme is less than that of schemes [14, 15].
7.1.3. Communication Cost
Table 4 shows the communication overhead comparison; it can be seen that our scheme has less communication overhead than the other schemes overall.
In the Encrypt phase, our scheme has lower cost than the other schemes because it uploads only one Bloom filter value per file, whereas the other schemes upload all the encrypted keywords. The communication cost of our scheme for keyword ciphertext processing is therefore related to the number of files rather than the number of keywords: with n files, our scheme only needs to pass n Bloom filter values and 2n group elements (two auxiliary values per file) to the cloud, while schemes [29] and [14] need to transmit group elements for every keyword of every file, and scheme [15] has the biggest overhead, transmitting per-keyword group elements as well as Bloom filter elements.
In the Trapdoor phase, all schemes have the same communication overhead.
In the Search phase, all four schemes need to return the search results; scheme [14] additionally transmits two auxiliary values for verification, scheme [15] transmits three, and our scheme needs two auxiliary values.
Since scheme [29] has no verification function, the communication cost of our scheme is higher than that of scheme [29] but less than that of schemes [14, 15].
7.1.4. Storage Cost
We display the storage cost comparison in Table 5, which demonstrates that our scheme has less storage overhead than the other schemes in general.
On the publisher/data owner side, all schemes store one key and thus have the same cost.
On the user side, our scheme stores two keys, while the other schemes store only one.
On the cloud side, our scheme has less storage cost than the other three schemes. In our scheme, the cloud server only needs to store one Bloom filter value per file, while the other schemes need to store all the encrypted keywords. Therefore, the cloud's storage cost in our scheme is related only to the number of files, while that of the other schemes is related to the number of keywords: our scheme stores 2n group elements and n Bloom filter values, whereas schemes [29] and [14] store group elements for every keyword of every file, and scheme [15] additionally stores Bloom filter elements.
Therefore, our scheme has less storage cost than the other three schemes.
7.2. Experimental Results
To assess the actual performance, we carry out comparison experiments with schemes [14, 15, 29]. We use the latest JPBC library to implement prototypes of these schemes. Our experiments run on a computer with an Intel Core i5-6550 CPU under the Windows 7 system. We use the Enron database as the source, from which we extracted 10,000 documents, 6,000 keywords, and 80,000 keyword-document pairs; most files match fewer than 10 keywords. We select the Type-A pairing to implement the concrete algorithms and then experiment with different numbers of files and keywords across the four schemes. The experimental results are consistent with our performance analysis, as shown in Figure 4.

In the Init and Authorization phases, all schemes have the same computation cost, which increases linearly with the number of files, as shown in Figures 4(a) and 4(c).
In the KeyGen and Trapdoor phases, all schemes have the same operation time for different numbers of files, as shown in Figures 4(b) and 4(e).
In the Encrypt and Search phases, our scheme has the same operation time as scheme [29], which is less than that of schemes [14, 15], as shown in Figures 4(d) and 4(f).
In the Verification phase, Figure 4(g) shows that the user side of our scheme takes less computational time than schemes [14, 15], because the edge completes most of the calculations and the user only needs to perform hash computations to obtain the verification results. Note that scheme [29] does not provide a verification function.
To sum up the experiments, the overall operation time of our scheme is lower than that of schemes [14, 15] across all processes.
8. Conclusion
In the edge computing environment, we proposed a new verifiable data search scheme with fine-grained authorization, which not only realizes file-oriented search authority management but also verifies the correctness and completeness of results. In addition, with the assistance of the edge server, clients with limited resources can easily carry out the search and verification tasks. Besides, we proved that our scheme is verifiable and secure, and the security model does not rely on random oracles. To assess the practicability of the proposed scheme, we implemented it and conducted experiments in a simulated environment. As expected, our scheme effectively reduces the computation, communication, and storage costs. In the future, we intend to focus on the dynamic management of users and files, as well as further authorization mechanisms to prevent key abuse. We also plan to instantiate the data sharing model with blockchain technology.
Data Availability
The processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research is supported by National Natural Science Foundation of China (No. 61932010) and Science and Technology Project of Guangzhou City (No. 201707010320).