Abstract

In cloud storage applications, the cloud service provider (CSP) may delete or damage the user's data. To avoid responsibility, the CSP may not actively inform the user after the data is damaged, which causes losses to the user. Public auditing technology has therefore attracted increasing research attention in recent years. However, most existing auditing schemes rely on a trusted third-party auditor (TPA). Although the TPA brings fairness and efficiency, the possibility of a malicious auditor cannot be ruled out, because there is no fully trusted third party in the real world. As an emerging technology, blockchain can effectively solve the trust problem among multiple parties and is therefore well suited to removing the security bottleneck of TPA-based public auditing schemes. This paper proposes a public auditing scheme that uses blockchain technology to resist malicious auditors. In addition, experimental analysis demonstrates that our scheme is feasible and efficient.

1. Introduction

With the rapid development of cloud computing, users can access cloud services more economically and conveniently than ever: for example, cloud users can outsource numerous computing tasks to the CSP and reduce their purchases of local hardware resources [1]; moreover, with the help of cloud storage services such as Amazon, iCloud, and Dropbox [2], users can put aside geographical restrictions and upload local data to the CSP, paying only a small fee while greatly reducing local storage requirements and making data sharing more convenient. For enterprise users, the explosive growth of business data means that an enterprise must spend heavily on software/hardware resources to build an IT system and maintain a professional technical team to manage it, which is an extra burden. Hence, the “pay as you go” service mode of cloud storage is more convenient and practical: users can dynamically apply to the CSP for storage space according to their data volume, and the elastic resource allocation mechanism avoids resource waste.

Although the cloud storage service has a broad market prospect, many data security problems remain to be solved. Many famous CSPs have experienced information disclosure and service interruptions [3], such as iCloud's information disclosure, Amazon cloud's storage outage, Intuit's power failure, Sidekick's cloud disaster, and Gmail's email deletion. On August 6, 2018, Tencent Cloud admitted that a firmware bug in a physical hard disk caused a silent error for a user, i.e., the data read was inconsistent with the data written, which damaged the system metadata [4]. Therefore, solving the data integrity problem can not only enhance users' confidence in cloud storage services but also effectively promote the development of the cloud storage industry. Since cloud computing has become basic infrastructure in the era of big data, data security is the primary concern of cloud users.

However, in practical applications, due to system vulnerabilities, hacker attacks, hardware damage, human operation errors, or even the pursuit of maximum profit, the CSP may delete or damage some of the user's data [5–7]. For example, a hospital may outsource all of its electronic medical records to the CSP, but the CSP may lose part of the stored data; this causes a great loss to the users when these records cannot be retrieved. In order to avoid responsibility, the CSP may not actively inform the data owners after the data is damaged. In addition, in some special service models, the CSP claims to provide a multi-backup storage service but actually provides only an ordinary single-backup storage service, cheating consumers to obtain additional service fees. All of these factors make cloud users unable to trust the CSP fully.

The traditional method of checking the integrity of remotely stored files is to download all the data from the CSP to the local machine; the data owner then checks the data locally by computing a message authentication code or signature [8–11]. However, when a large amount of data is stored on the remote cloud server (an online retailer like Amazon produces hundreds of petabytes of data every day), it is unrealistic to download all the data to the local machine every time the integrity is checked, because this wastes a lot of bandwidth and storage resources. On the other hand, integrity checking is a periodic task, and it is expensive for mobile devices with limited resources to execute it locally [12]. Finally, for fairness, it is not reasonable to let either the CSP or the data owner perform the audit after data corruption occurs, so it is an ideal choice to introduce a trusted third party to replace the CSP or the data owner in checking the data integrity [13] (Figure 1). In this model, the client sends a request to the auditor for auditing delegation; the auditor then executes a challenge-response protocol with the CSP to check the integrity. At last, the auditor obtains the auditing result and sends it to the client. However, after the third-party auditor (TPA) is introduced, the problem of privacy disclosure arises. For example, a malicious auditor may obtain the data owner's identity information during the auditing process and thus learn which part of the stored data is more valuable to the user [14]; in addition, it is possible for the TPA to learn the content of the stored data blocks in its interaction with the CSP [15].

In 2003, Deswarte and Quisquater [8] proposed a remote data integrity checking scheme for distributed systems based on a challenge-response protocol. Although their scheme does not need to download all the data when checking the remotely stored data, it requires a large number of modular exponentiation operations on the server side, resulting in a large computing overhead; besides, the client needs to maintain a complete backup of the data locally. In 2004, Sebe et al. [9] proposed a remote integrity checking scheme based on the Diffie-Hellman protocol. In their scheme, the client needs to store a fixed number of extra bits for each data block; that is to say, their scheme is only practical when the size of a data block is much larger than this overhead (otherwise, it is no better than storing all the data locally). In 2005, Oprea and Reiter [10] proposed a scheme based on tweakable encryption. However, the client needs to download all the files in the checking phase, and their scheme aims at data retrieval, so it is not suitable for the scenario of data integrity checking. In 2006, Schwarz and Miller [11] addressed the data security problem of remote storage across multiple servers based on algebraic signatures. However, the computation cost on the client side increases dramatically with the number of data blocks to be checked.

The schemes introduced above share the same problem: the client needs to access the complete data backup, which, as mentioned before, is obviously not practical. Many scholars later carried out research on this issue. In 2007, Ateniese et al. [16] first proposed the concept of provable data possession (PDP), based on an RSA homomorphic linear authenticator and random sampling technology. The user can check the data stored on the remote server without downloading all the data to the local machine, thus solving the defect of the earlier schemes; however, their scheme only supports static data. In 2008, Shacham and Waters proposed two improved schemes based on the BLS short signature [17]: the first scheme, based on the BLS signature, supports an unlimited number of public verifications on the data; the second scheme calculates the authenticators using a pseudorandom function but does not support public verification.

Besides static data, users may also add, delete, or modify the remote data; these dynamic operations change the indices of the data blocks and invalidate the original authenticators, as shown in Figure 2. If all the authenticators are recalculated every time the data owner performs a dynamic operation, a lot of computing and communication cost is produced. Therefore, many scholars have studied schemes supporting dynamic data. In 2008, Ateniese et al. [18] were the first to propose a dynamic PDP scheme based on symmetric keys. However, because their scheme is based on symmetric encryption, it does not support public auditing. In reference [19], Erway et al. introduced a dynamic PDP scheme that supports dynamic data using a rank-based skip list. In reference [20], Zhu et al. proposed a scheme with an index-hash table to support efficient updates of dynamic data.

In 2011, Hao et al. [21] extended Sebe et al.'s scheme [9] and proposed a block-level dynamic auditing scheme based on RSA homomorphic tags. Block-level dynamics means that the data owners can insert, delete, or update data blocks; however, after an update they still need to recalculate the authenticators, which is not flexible.

In practical applications, the integrity checking task is performed by the TPA, and most of the schemes proposed later support public auditing. In 2009, Wang et al. [13] were the first to propose a TPA-based integrity checking scheme, built on the BLS short signature and the Merkle hash tree (MHT). In this scheme, any entity in the network can challenge the CSP to check the integrity of the data stored on the cloud server, but the scheme does not support fully dynamic operations on the data.

Although the introduction of the TPA brings many benefits, it also brings new security and privacy issues. Therefore, public auditing schemes supporting privacy preservation have become a research hotspot in recent years. In 2010, Wang et al. [14] proposed a public auditing scheme supporting content privacy preservation based on the random mask technology. This scheme supports batch verification of multiuser tasks. However, due to the large number of verification tags generated on the server side, the system suffers a large storage burden. In 2012, Wang et al. [15] proposed a public auditing scheme to protect the identity privacy of group users based on group signature technology, but the group signature produces a huge computing cost on the data owner's side, and their scheme does not consider the situation where users leave and join the group dynamically: users need to recalculate the authenticators of all the stored data blocks when the group key changes. In 2014, Wang et al. [22] proposed an auditing scheme based on ring signature technology, which can protect the identity privacy of group members and supports group members joining/leaving the group dynamically, but the efficiency of their scheme decreases as the number of group members increases, and malicious users cannot be traced in their scheme.

A large number of signature operations are involved in the authenticator generation phase; however, many existing terminal devices are embedded devices with low computing power, such as mobile phones or sensors in IoT applications. Therefore, public auditing schemes for low-power equipment have also been studied. In 2015, He et al. [23] proposed a public auditing scheme based on the certificateless cryptosystem and applied it to cloud-assisted wireless body area networks. With their certificateless mechanism, certificates do not need to be transferred and stored as in previous proposals, which saves bandwidth; users also do not need to query the CRL (certificate revocation list), which greatly saves computing resources. In 2016, Li et al. [12] proposed two auditing schemes for low-performance equipment based on online/offline signature technology. In the first, basic scheme, the TPA needs to store some offline signature information, so it is only suitable for users who upload short data items (such as a phone number) to the cloud; in the second scheme, the authors solved the problem that the TPA needs to store a large number of offline signatures.

In 2017, Li et al. [24] pointed out that most of the existing schemes are based on the PKI infrastructure and that their security depends on the security of the keys, and then proposed a public auditing scheme based on fuzzy identity signature technology. In this scheme, the user's identity (ID) is the public key, which improves the security of the system. However, Xue et al. [25] pointed out that Li et al.'s scheme cannot resist a malicious auditor's attack. Yu and Wang put forward a scheme to resist key disclosure attacks in the literature [26], which guarantees the forward security of the system through a key updating mechanism; data blocks tagged with old keys can still be audited after the keys are updated.

In 2013, Liu et al. [27] proposed a public auditing scheme based on a rank-based Merkle hash tree to improve the efficiency of the traditional hash tree algorithm. However, this algorithm imposes a large computation cost on the TPA: when there are a large number of data blocks, the TPA needs to spend a lot of time computing the Merkle tree paths. Yang and Jia [28] proposed a scheme based on an index table structure and the BLS signature algorithm, which supports a PDP mechanism with fully dynamic data operations. In their scheme, because the index table stores the metadata of the block file in a continuous storage space, deletion and insertion move a large amount of data. As the user data scale and the number of block files grow, the time cost of deletion and insertion increases dramatically, which directly increases the verification time after dynamic operations and reduces the auditing efficiency. In 2016, Li et al. [29] proposed a PDP auditing model (LPDP) based on the large branching tree (LBT) structure to solve the problem that the authentication path is too long when building the MHT. The LBT adopts a multibranch structure, and the depth of the constructed LBT decreases as the out-degree increases, thus reducing the auxiliary information in data integrity checking, simplifying dynamic data updates, and reducing the computation overhead between entities in the system. In 2017, Garg and Bawa [30] added indexes and timestamps to the MHT structure introduced in scheme [13] and proposed the rist-MHT (relative indexed and time-stamped Merkle hash tree) structure, on which they built a PDP model. Compared with the MHT, the rist-MHT shortens the authentication path, thus reducing the time cost of node queries; in addition, the timestamp attribute gives the authenticators data freshness. However, although these MHT-based algorithms [13, 27, 30] avoid downloading all the data in the auditing process, a correct verification result can only prove that the cloud server stores the hash tree, not the uploaded data itself.

In recent years, many scholars have also carried out research on other issues such as group user revocation, data deduplication, sensitive information sharing, and antiquantum attacks.

In 2020, Zhang et al. [31] pointed out that in existing group sharing schemes, user revocation results in a large computational cost for the authenticators associated with the revoked users, so they proposed an identity-based public auditing scheme that supports user revocation, in which revoking a malicious user does not affect the auditing of the previously stored data blocks.

Young et al. [32] combined the ciphertext deduplication technology [33] with a public auditing scheme. Because a large amount of the data uploading work is transferred to the CSP, the client only needs to carry out a single tag calculation step, which is suitable for low-performance client environments.

Shen et al. [34] proposed a public auditing scheme based on IBE (identity-based encryption) that can hide sensitive information when the data owner shares the data with other users. In this scheme, the role of a sanitizer is added to sanitize the sensitive data and its signatures, realizing privacy preservation of the sensitive information in shared medical records.

In 2019, Tian et al. [35] pointed out that none of the schemes above meets all the desired security properties and put forward a new scheme. In the tagging process, the users' signatures are converted into group signatures, thus protecting the identity privacy of the users; in the auditing process, the content privacy is protected by using mask technology; all data operations are recorded in an operation history table so that all illegal activities can be tracked.

Xue et al. [25] proposed a public auditing scheme based on blockchain to resist malicious auditors. In their scheme, the challenge verification information is generated based on a bitcoin algorithm. However, the final auditing result still relies on the TPA uploading it to the blockchain, which does not fundamentally eliminate the threat of a malicious TPA.

Through the analysis above, we can see that the schemes discussed share the following defect: their security relies on a trusted third party, the TPA. Although the TPA brings fairness and efficiency to the auditing process, the possibility of a malicious auditor cannot be ruled out, because there is no completely trusted third party in the real world. Although some scholars have studied the privacy protection problem in TPA-based public auditing schemes with group signatures, ring signatures, and other privacy protection technologies, the TPA still has to be treated as a semitrusted entity, and the risk of a malicious auditor has not been eliminated fundamentally. As a new technology, blockchain can effectively solve the trust problem among multiple individuals, which makes it suitable for removing the security bottleneck of TPA-based public auditing schemes. This paper aims to solve the malicious auditor problem in public auditing schemes by combining them with blockchain technology.

Contributions. The main contributions are summarized as follows:
(1) We propose a framework for a public auditing scheme without a trusted third party based on blockchain and give its basic workflow.
(2) We propose a certificateless public auditing scheme based on the proposed framework to resist the malicious auditor and key escrow problems.
(3) We present a detailed security analysis of our scheme. The efficiency and security comparison shows that our scheme outperforms existing schemes.

3. Preliminaries

Definition 1. Bilinear map.
Given a cyclic additive group $G_1$ of prime order $q$ and a cyclic multiplicative group $G_2$ of the same order $q$, a bilinear pairing refers to a map $e: G_1 \times G_1 \rightarrow G_2$ which satisfies the following properties:
(1) Bilinearity: for all $P, Q \in G_1$ and $a, b \in Z_q^*$, $e(aP, bQ) = e(P, Q)^{ab}$.
(2) Nondegeneracy: there exists $P \in G_1$ such that $e(P, P) \neq 1$.
(3) Computability: for all $P, Q \in G_1$, there exists an efficient algorithm to compute $e(P, Q)$.

Definition 2. Elliptic Curve Discrete Logarithm Problem (ECDLP).
Suppose that $P$ is a generator of $G_1$. Given $P$ and $Q = xP \in G_1$ for an unknown $x \in Z_q^*$, it is computationally infeasible to find the integer $x$ such that $Q = xP$.

Definition 3. Computational Diffie-Hellman Problem (CDHP).
Suppose that $P$ is a generator of $G_1$ and $a, b \in Z_q^*$. Given the tuple $(P, aP, bP)$, it is computationally infeasible to compute $abP$ with only $P$, $aP$, and $bP$.
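To build intuition for Definitions 2 and 3, the following toy Python sketch (an illustration only: it uses a small multiplicative group modulo a prime rather than an elliptic-curve group, and the parameter values are hypothetical) shows the asymmetry these hardness assumptions rely on: the forward direction, computing $Q$ from the secret exponent, is a single fast exponentiation, while recovering the exponent from $Q$ requires a search whose cost grows with the group order.

```python
# Toy discrete-logarithm demo in the multiplicative group Z_p^* (illustration only;
# real schemes use elliptic-curve groups of ~256-bit prime order, where brute force is infeasible).
import secrets

p = 0xFFFFFFFB      # small prime modulus (hypothetical toy parameter, 2^32 - 5)
g = 5               # generator-like base for the demo

x = secrets.randbelow(p - 2) + 1    # secret exponent (the "discrete logarithm")
Q = pow(g, x, p)                    # easy direction: one modular exponentiation

def brute_force_dlog(target: int, base: int, modulus: int, limit: int = 2_000_000):
    """Exhaustive search for the exponent; succeeds only when the exponent is tiny."""
    acc = 1
    for k in range(limit):
        if acc == target:
            return k
        acc = (acc * base) % modulus
    return None  # gave up: the search space is far too large

print("recovered exponent:", brute_force_dlog(Q, g, p))  # almost always None
```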

4. The Framework of Our Public Auditing Scheme Based on Blockchain

4.1. System Model

In our proposed framework, there are four roles: the cloud service provider (CSP), the client, the key generation center (KGC), and the auditors.

4.1.1. Cloud Service Provider

In our scheme, the CSP is a semitrusted entity with strong computing/storage resources, and the client uploads local data to the remote CSP for storage. The CSP faithfully follows the whole auditing protocol with the other entities; however, it may attempt to cover up the fact of data corruption.

4.1.2. Client

The client is a user of the cloud storage service. He/she stores his/her data on the CSP to reduce the local storage burden. To ensure the integrity of the remotely stored data, the client can delegate the auditors to execute the interactive protocol with the CSP and obtain the auditing result from the auditors.

4.1.3. KGC

The KGC is a trusted entity in our proposal; it generates the public parameters of the whole system and the client's partial secret key in the certificateless cryptosystem.

4.1.4. Auditor

Auditors are distributed nodes in the blockchain network, and the ProofVerify algorithm is deployed on the auditors in the form of a smart contract. After receiving the proof generated by the CSP, the auditors calculate the checking result and store it in the storage layer of the blockchain.

The relationship among these entities is shown in Figure 3.

4.2. The Proposed Framework

In this section, we propose a basic framework for a public auditing scheme based on blockchain technology and give its general workflow. In our framework, in order to solve the problem of malicious auditors in traditional TPA-based schemes, we use the distributed nodes in the blockchain network as auditors to check the integrity.

Before the client uploads the data to the CSP, it uses the private key issued by the KGC to calculate the linear authenticators of the file. The file is first divided into data blocks, the authenticator of each block is computed, and then the client uploads the data and the corresponding linear authenticators to the CSP for storage. When the client wants to check the integrity of the data stored in the cloud, the client generates the challenge information (randomly generated integers) and sends it to the auditors and the CSP; the CSP calculates the proof according to the challenge information and returns the proof to the auditors.
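The following Python sketch outlines this workflow end to end. It is a simplified illustration under stated assumptions: the pairing-based linear authenticator of Section 5 is replaced by a keyed-hash (HMAC) stand-in, the entities are plain function calls rather than networked parties, and all names (tag_block, make_challenge, and so on) are hypothetical.

```python
import hmac, hashlib, secrets

BLOCK_SIZE = 4096  # bytes per data block (hypothetical choice)

def split_into_blocks(data: bytes, size: int = BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def tag_block(sk: bytes, index: int, block: bytes) -> bytes:
    # Stand-in for the linear authenticator: binds the block content to its index.
    return hmac.new(sk, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def make_challenge(num_blocks: int, c: int):
    # Randomly sample c block indices and a random coefficient for each.
    indices = secrets.SystemRandom().sample(range(num_blocks), c)
    return [(i, secrets.randbelow(2**32)) for i in indices]

# Client side: prepare and "upload" blocks and tags.
sk = secrets.token_bytes(32)
blocks = split_into_blocks(b"example file content " * 1000)
tags = [tag_block(sk, i, b) for i, b in enumerate(blocks)]

# Client side: issue a challenge over 3 randomly chosen blocks.
challenge = make_challenge(len(blocks), c=3)

# CSP side: return the challenged blocks and tags as a (naive) proof;
# the real scheme aggregates them so the auditors never see the raw data.
proof = [(i, blocks[i], tags[i]) for i, _ in challenge]

# Auditor side: recompute each tag and compare (sk is needed here only because
# the HMAC stand-in is symmetric; the real scheme verifies with public keys).
print(all(hmac.compare_digest(tag_block(sk, i, blk), t) for i, blk, t in proof))
```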

Auditors are smart contracts deployed on the blockchain nodes, and their function mainly includes two parts: processing the client's auditing requests and executing the ProofVerify algorithm (the main part of the auditing scheme). The distributed auditors calculate the auditing results according to the proof returned by the CSP, store the results in the storage layer of the blockchain, and thus maintain a history that cannot be tampered with.
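A minimal sketch of an auditor node's two responsibilities, assuming a pluggable verify_proof callback that stands in for the ProofVerify smart contract and a local append-only list that stands in for the blockchain's storage layer (class and method names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class AuditorNode:
    node_id: int
    verify_proof: Callable[[bytes, bytes], bool]  # (client public key, proof) -> valid?
    ledger: List[Tuple[str, int, bool]] = field(default_factory=list)  # append-only results

    def handle_audit_request(self, request_id: str, challenge: bytes) -> bytes:
        # Store the challenge so the later proof can be checked against it.
        self.pending = (request_id, challenge)
        return challenge  # forwarded to the CSP in the real protocol

    def handle_proof(self, request_id: str, public_key: bytes, proof: bytes) -> bool:
        # Execute the ProofVerify logic and record the result in the (stand-in) ledger.
        result = self.verify_proof(public_key, proof)
        self.ledger.append((request_id, self.node_id, result))
        return result

node = AuditorNode(node_id=0, verify_proof=lambda pk, proof: len(proof) > 0)  # toy verifier
node.handle_audit_request("req-1", b"challenge-bytes")
print(node.handle_proof("req-1", b"client-pk", b"aggregated-proof"), node.ledger)
```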

Secondly, when the client performs data updating operations (such as adding, deleting, querying, or modifying) on the stored data, the CSP generates an operation log of this update, and the client and the CSP compute a multi-signature on the log, which indicates that both parties agree with the recorded operation. It should be noted that auditing is a periodic process; it can be scheduled at a fixed time every day, for example just after midnight, but each time the user performs an updating operation, an auditing action is also triggered automatically.

If the client or the CSP finds that the stored data has been damaged, they can compare the current auditing results with the historical records stored in the blockchain and combine them with the signed operation logs to determine the responsibility for the data damage; because these records are kept in the distributed ledger with nonrepudiation and tamper resistance, neither party can deny them.

4.3. Consensus Mechanism of the Distributed Auditing Nodes

When a client sends an auditing request to the distributed auditors, the blockchain network triggers a consensus mechanism, and the data stored in the CSP is audited and the results are stored among the nodes. We build two consensus mechanisms, as shown in Figure 4: one is a secure model, and the other is an efficient model. The following steps show the consensus mechanism among the distributed auditors in the auditing process (a sketch of the block assignment follows this list):
(1) The client broadcasts the auditing request with the challenge information to the blockchain network, and the auditors store the challenge information.
(2) The two mechanisms differ from this step on. In the efficient mechanism, when the CSP receives the auditing request, it divides the challenged data into parts according to the number of auditing nodes and sends each part to a different auditor; in the secure mechanism, the CSP does not divide the data into parts but broadcasts them to the network, so all the distributed nodes receive all the data blocks.
(3) After receiving the data blocks, each auditor executes the ProofVerify algorithm with the client's public key and the proof sent by the CSP as input. In the efficient model (the left side of Figure 4), the auditing task is divided into parts and each auditor only audits part of the data blocks, which improves the auditing speed; in the secure mechanism (the right side of Figure 4), each auditor audits all the data blocks, so the system can resist attacks from a single malicious auditor.
(4) Finally, the auditors store the auditing results as follows: in the efficient model, each auditor broadcasts its result to the other nodes in the blockchain network, so that all the storage nodes obtain the full auditing result for the entire set of requested data blocks; in the secure model, the auditors do not need to broadcast the auditing result in the network.
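The following sketch expresses the difference between the two models as a simple assignment of challenged block indices to auditors (helper names are hypothetical; in the real protocol the CSP sends proofs over the blocks, not the raw indices):

```python
from typing import Dict, List

def assign_blocks(challenged: List[int], auditor_ids: List[int], secure: bool) -> Dict[int, List[int]]:
    """Efficient model: each auditor checks a disjoint share of the challenged blocks.
    Secure model: every auditor checks every challenged block."""
    if secure:
        return {a: list(challenged) for a in auditor_ids}
    k = len(auditor_ids)
    return {a: challenged[i::k] for i, a in enumerate(auditor_ids)}

challenged_blocks = [3, 7, 12, 18, 21, 30]
print(assign_blocks(challenged_blocks, auditor_ids=[0, 1, 2], secure=False))
# {0: [3, 18], 1: [7, 21], 2: [12, 30]} -- workload split k ways
print(assign_blocks(challenged_blocks, auditor_ids=[0, 1, 2], secure=True))
# every auditor receives all six challenged blocks
```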

5. The Detailed Scheme

In this section, we give a detailed proposal based on the framework introduced above. Our scheme is constructed on the basis of He et al.'s CLPA scheme [23] and Yu and Wang's IDBA scheme [26].

(1) Setup: with the security parameter as input, the KGC generates the system parameters and the master key by executing the following steps:
(1) The KGC selects a large prime number $q$ and uses the bilinear group generator to generate an additive group $G_1$ and a multiplicative group $G_2$ of order $q$; normally, $G_1$ and $G_2$ can be generated simultaneously by the bilinear group generator. The KGC chooses a bilinear pairing $e: G_1 \times G_1 \rightarrow G_2$.
(2) Let $P$ be a generator of group $G_1$. The KGC randomly selects a big integer $s \in Z_q^*$ as the master key, keeps $s$ secret, and computes the public key $P_{pub} = sP$.
(3) The KGC publishes the system parameters $params = \{q, G_1, G_2, e, P, P_{pub}, H_1, H_2, H_3, H_4, H_5\}$, where $H_1, \ldots, H_5$ are five hash functions.

(2) PartialPrivateKeyExtract: the client registers with the KGC to obtain the partial private key through the following steps:
(1) The client submits his/her identity $ID$ to the KGC.
(2) After receiving the client's identity $ID$, the KGC chooses a random big integer $r_{ID} \in Z_q^*$ and computes $R_{ID} = r_{ID}P$ and $d_{ID} = r_{ID} + s H_1(ID, R_{ID}) \bmod q$.
(3) The KGC sends the partial private key $(d_{ID}, R_{ID})$ to the client through a secure channel.

(3) SetSecretValue: the client sets his/her secret value as follows:
(1) The client randomly chooses a big integer $x_{ID} \in Z_q^*$ as his/her secret value.
(2) The client keeps $x_{ID}$ secret.

(4) SetPublicKey: the client sets his/her public key as follows:
(1) The client computes $PK_{ID} = x_{ID}P$.
(2) The client sets $PK_{ID}$ as his/her public key.

(5) SetPrivateKey: the client sets $SK_{ID} = (x_{ID}, d_{ID})$ as his/her private key.
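The key-related steps above follow the usual certificateless pattern: a KGC master key, an identity-bound partial private key, and a user-chosen secret value that the KGC never sees. The sketch below illustrates that pattern with a toy multiplicative group modulo a prime in place of the pairing group $G_1$ and a generic partial-key formula in the style of standard certificateless constructions; it is an assumption-laden illustration, not the paper's exact equations.

```python
import hashlib, secrets

# Toy group parameters (hypothetical; the real scheme uses a pairing-friendly elliptic-curve group).
p = 0xFFFFFFFB   # small prime modulus
n = p - 1        # order of the multiplicative group Z_p^*
g = 7            # generator-like base; pow(g, k, p) plays the role of the scalar multiple k*P

def H1(identity: str, R: int) -> int:
    return int.from_bytes(hashlib.sha256(f"{identity}|{R}".encode()).digest(), "big") % n

# Setup (KGC): master key s, system public key P_pub = g^s.
s = secrets.randbelow(n - 1) + 1
P_pub = pow(g, s, p)

# PartialPrivateKeyExtract (KGC), bound to the client's identity.
identity = "alice@example.com"          # hypothetical identity string
r = secrets.randbelow(n - 1) + 1
R = pow(g, r, p)
d = (r + s * H1(identity, R)) % n       # partial private key

# SetSecretValue / SetPublicKey / SetPrivateKey (client).
x = secrets.randbelow(n - 1) + 1        # secret value, never revealed to the KGC
PK = pow(g, x, p)                       # client public key
SK = (x, d)                             # full private key = secret value + partial key

# Consistency check mirroring d = r + s*H1(ID, R):  g^d == R * P_pub^H1(ID, R)  (mod p)
assert pow(g, d, p) == (R * pow(P_pub, H1(identity, R), p)) % p
print("partial key verifies against the KGC public key")
```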

(6) Store: the client with identity $ID$, private key $SK_{ID}$, and public key $PK_{ID}$ runs this algorithm to generate the integrity checking tags for the data file $F$. Firstly, the data file is divided into $n$ blocks $\{m_1, m_2, \ldots, m_n\}$; for every data block $m_i$, the client computes the tag $\sigma_i$ with the following steps:
(1) The client computes the hash values that bind the block index and the file identity into the tag.
(2) The client computes the tag $\sigma_i$ and sends the blocks together with their tags $\{(m_i, \sigma_i)\}_{1 \le i \le n}$ to the CSP, where the file name $name$ is the unique identity of $F$ and a random number is embedded in the tag computation.

(7) Audit: to check the integrity of the uploaded data, the client executes the following challenge-response protocol with the CSP and the auditors:
(1) Challenge: the client generates the challenge information as follows:
(i) selects a random $c$-element subset $I$ of the set $\{1, 2, \ldots, n\}$;
(ii) selects a random coefficient $v_i \in Z_q^*$ for each $i \in I$;
(iii) generates the challenge information $chal = \{(i, v_i)\}_{i \in I}$ and broadcasts it in the network, so that the CSP and all the auditors can receive it.
(2) ProofGen: after receiving the challenge information from the client, the CSP generates a proof of correct possession of the selected blocks as follows:
(i) chooses a big integer randomly as a masking value;
(ii) computes the aggregated proof values over the challenged blocks and their tags;
(iii) broadcasts the proof information Prof to the auditors; if the client chooses to audit in the efficient model, the CSP divides the challenged data blocks into parts, generates proof information for every set of data blocks, and sends them to the auditors separately.

(8) ProofVerify: upon receiving Prof, the auditors execute this algorithm to check the integrity of the data stored in the CSP. Here, Prof denotes the proof generated by the CSP; in the secure model, Prof is the proof information over all the challenged data blocks, while in the efficient model, Prof is the partial proof information over one share of them. We use the same notation Prof for both cases.
(1) The auditors compute the verification values from the challenge information, the client's public key, and the system parameters.
(2) The auditors check whether the verification equation (Equation (5)) holds. If it holds, the auditors output 1 to indicate the correct storage of the data file $F$; otherwise, the auditors output 0 to indicate data corruption.
(3) The auditors create an auditing record and broadcast it in the network, so that all the auditors can obtain and store the full auditing result; in the secure model, each auditor can calculate the full auditing result by itself, and the broadcast operation is not needed.
A toy sketch of the aggregation and verification underlying the Audit and ProofVerify phases is given after this description.
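The pairing-based verification equation could not be reproduced above, so the following self-contained sketch instead uses a simplified secret-key linear authenticator over $Z_q$ (an assumption for illustration only, not the paper's pairing construction) to show how per-block tags are aggregated into a single proof $(\mu, \sigma)$ that can be checked without inspecting each block individually.

```python
import hashlib, secrets

q = (1 << 127) - 1                 # toy prime modulus for the tag arithmetic (2^127 - 1 is prime)
alpha = secrets.randbelow(q)       # tagging secrets (stand-ins for the client's signing key)
beta = secrets.randbelow(q)

def h(name: str, i: int) -> int:
    return int.from_bytes(hashlib.sha256(f"{name}|{i}".encode()).digest(), "big") % q

# Store: per-block linear tags sigma_i = alpha*H(name, i) + beta*m_i  (mod q).
name = "file-42"                                   # hypothetical file identity
blocks = [secrets.randbelow(q) for _ in range(10)] # blocks encoded as integers mod q
tags = [(alpha * h(name, i) + beta * m) % q for i, m in enumerate(blocks)]

# Challenge: a random c-element index set I with a random coefficient v_i for each i.
I = secrets.SystemRandom().sample(range(len(blocks)), 4)
chal = {i: secrets.randbelow(q) for i in I}

# ProofGen (CSP): aggregate the challenged blocks and tags.
mu = sum(v * blocks[i] for i, v in chal.items()) % q
sigma = sum(v * tags[i] for i, v in chal.items()) % q

# ProofVerify (auditor): check sigma == alpha * sum(v_i*H(name, i)) + beta * mu  (mod q).
expected = (alpha * sum(v * h(name, i) for i, v in chal.items()) + beta * mu) % q
print("proof accepted:", sigma == expected)        # True iff the challenged blocks are intact
```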

(9) DataUpdate: when the client updates the file stored in the cloud, a log Log is generated by the CSP to record the details of the client's operation. The CSP and the client execute MultiSign(Log) and broadcast it in the blockchain network for storage; MultiSign(Log) denotes the multi-signature of the client and the CSP on the Log. After each DataUpdate operation finishes, the system automatically triggers the Audit phase.
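A minimal sketch of the log-signing step, under the simplifying assumption that MultiSign(Log) is realized as two independent Ed25519 signatures over the same log entry rather than an aggregated multi-signature (all key and field names are hypothetical):

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Long-term keys of the two parties (hypothetical; issued out of band).
client_key = Ed25519PrivateKey.generate()
csp_key = Ed25519PrivateKey.generate()

# The CSP records the update operation in a log entry.
log_entry = json.dumps({"file": "file-42", "op": "modify", "block": 17, "ts": 1700000000},
                       sort_keys=True).encode()

# "MultiSign(Log)": both parties sign the same entry; the pair is broadcast for on-chain storage.
multi_sign = {"log": log_entry,
              "client_sig": client_key.sign(log_entry),
              "csp_sig": csp_key.sign(log_entry)}

# Any blockchain node can later verify that both parties agreed to this operation.
client_key.public_key().verify(multi_sign["client_sig"], multi_sign["log"])
csp_key.public_key().verify(multi_sign["csp_sig"], multi_sign["log"])
print("log entry signed by both client and CSP")
```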

6. Security Analysis and Correctness Proof

This section gives the correctness proof and security analysis of our proposed scheme. We mainly introduce the threat model and discuss the security goals that our scheme achieves.

6.1. Correctness Proof

The correctness of our auditing scheme can be derived as follows:

From this derivation, we can see that, through the verification of Equation (5), the auditors can correctly check the integrity of the data stored in the CSP.

6.2. Threat Model

Before the security proof, we first introduce the threat model of our scheme. Similar to the literature [26], we consider three types of attacks against public auditing schemes: forgery, replacement, and replay attacks. Each type of attack is defined as follows:
(1) Replacement attack: the adversary attempts to pass the auditing phase by replacing a challenged block and its signature with an unchallenged or uncorrupted block/signature pair.
(2) Forgery attack: the adversary forges the proof information to deceive the auditor/user or forges an auditing result to cheat the user.
(3) Replay attack: the adversary replays proof information generated previously, attempting to pass the auditing phase.

Similar to the literature [26], we consider that the CSP may launch all the attacks above and that the auditor may launch forgery attacks. In addition, external adversaries may launch forgery and replay attacks.

6.3. Security Proof

Theorem 4. Our scheme can resist replacement attacks from the CSP.

Proof. Suppose that the CSP wants to use well-maintained data blocks and their tags to replace a corrupted block in the file $F$. During the auditing process, both the auditors and the client execute the protocol honestly; that is, the client computes the tags correctly in the Store phase and sends them to the CSP.
Expanding the verification equation for the substituted proof shows that it can pass the verification phase only if several hash equalities hold simultaneously; since each tag is bound to its block index through the hash functions, the probability that these equations are satisfied simultaneously is negligible. That is, the substituted proof cannot pass the verification phase. Therefore, our scheme can resist the CSP's replacement attacks.

Theorem 5. Our scheme can resist forgery attacks from the CSP or the auditor.

Proof. Suppose that the adversary modifies a data block $m_i$ to some other value. During the auditing process, both the auditors and the CSP honestly execute the scheme; that is, in the Audit phase, the client broadcasts the challenge message to the CSP and the auditors in the network, and in the ProofGen phase, the CSP computes the proof as specified. If the modified block and its forged tag are to pass the verification phase, the adversary must compute a value that depends on both the random masking value selected by the CSP and the random challenge coefficients selected by the client. Since these values are chosen independently and cannot be known simultaneously by the same adversary, the forged tag cannot pass the ProofVerify phase. Hence, our scheme can resist forgery attacks from the CSP or the auditor.

Theorem 6. Our scheme can resist replay attack from the CSP.

Proof. If the stored data has been corrupted, the CSP may attempt to pass the auditing phase by replaying another block and its corresponding tag generated previously, constructing a tampered proof from them. The derivation of the ProofVerify process shows that this tampered proof can pass the auditing phase only if certain hash equalities hold; since the hash function is collision resistant, these equalities hold only with negligible probability.

In other words, the replayed proof generated by the CSP cannot pass the auditing phase. Therefore, our scheme can resist replay attacks.

6.4. The Other Security Requirement Discussions

This section discusses how our proposed scheme satisfies the security requirements of auditing schemes. Table 1 gives a brief security comparison of our scheme with CLPA [23] and IDBA [26].
(1) Public verifiability: as shown in the correctness proof, if the client correctly calculates the data tags before uploading the data file, the auditors can perform the interactive algorithm with the CSP and obtain the real storage status of the data blocks without the help of the client. Therefore, our scheme achieves public verifiability.
(2) Privacy preserving: in the data auditing process, the auditors can only obtain the aggregated data blocks and tags. From this information, the auditors cannot derive any useful information about the stored data. Therefore, our scheme achieves the goal of privacy protection.
(3) Batch auditing: as the correctness analysis shows, multiple data blocks can be sampled at one time in the auditing phase, and multiple auditing tasks can be verified in a batch to improve the auditing efficiency. Therefore, our scheme achieves the goal of batch auditing.
(4) Key escrow resistance: similar to the scheme CLPA [23], our scheme is based on certificateless cryptography; the secret key used to generate the authenticators consists of two parts, derived from the KGC and the client, respectively. Therefore, the KGC cannot obtain the user's full secret key, unlike in the scheme IDBA [26], which is based on the identity-based cryptosystem.
(5) Malicious auditor resistance: in our auditing scheme, the auditing result is calculated by the distributed nodes, and none of them can tamper with the result unless the attacker controls 51% of the nodes in the network. Compared with the existing blockchain-based public auditing scheme [25], the ProofVerify phase is moved onto the blockchain in the form of a smart contract instead of relying on a third-party auditor to upload the auditing result to the blockchain; thus, the possibility of the auditor creating a false result is eliminated fundamentally. Besides, because the data blocks are blinded with a mask and the auditors learn nothing about the audited data, the privacy of the data content is also protected.

6.5. Experimental Analysis

This section compares the performance of our proposed scheme with those of He et al.'s CLPA scheme [23] and the IDBA scheme [26]. Table 2 shows the computation overhead of these schemes in the Store phase on the client side and the ProofVerify phase on the auditors' side. From Table 2, we can see that in the Store phase, the time consumption of authenticator calculation in our scheme is slightly higher than in the other two schemes, because our scheme performs some additional processing in this phase to resist the forgery and replay attacks in the ProofVerify phase.

In the ProofVerify phase, because we use distributed auditors to audit the data blocks, we obtain better efficiency than the other schemes. If distributed auditors are not used, the computation cost of our scheme is the highest; but after applying the distributed processing mechanism of the efficient model, the efficiency is improved greatly. Table 3 lists the notations used in Table 2.

Finally, in order to quantify the comparison, we implemented these operations with jPBC, a well-known Java cryptographic library [36]. The experimental environment is as follows: an Intel i7 processor with a 1.8 GHz clock speed and 8 GB of RAM running Windows 10. Figures 5 and 6 compare the computational cost of the tag generation phase and the proof verifying phase. In the comparison of the auditing phase, we analyze two cases for $k$, where $k$ represents the number of distributed auditors in the blockchain network in the efficient model. We can see that in the efficient model, the more auditors are used in the blockchain network, the lower the auditing delay.
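The scaling behavior in the efficient model can be summarized with a simple workload model (an illustrative approximation with hypothetical numbers, not measured data): each auditor verifies roughly $\lceil c/k \rceil$ of the $c$ challenged blocks, so the verification delay shrinks roughly in proportion to the number of auditors $k$.

```python
import math

def efficient_model_delay(c_blocks: int, k_auditors: int, per_block_ms: float) -> float:
    """Approximate auditing delay when c challenged blocks are split across k auditors."""
    return math.ceil(c_blocks / k_auditors) * per_block_ms

# Hypothetical numbers: 300 challenged blocks, 12 ms of pairing work per block.
for k in (1, 2, 4, 8):
    print(f"k={k}: ~{efficient_model_delay(300, k, 12.0):.0f} ms")
# k=1: ~3600 ms, k=2: ~1800 ms, k=4: ~900 ms, k=8: ~456 ms
```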

Communication Cost. In the three schemes, the challenge information is the same; in the response phase, our scheme returns the aggregated proof values described in the ProofGen phase. From the comparison in Table 2, we can see that our scheme has the same communication cost as IDBA and a slightly higher cost than CLPA.

7. Conclusion

In this paper, we pointed out that most TPA-based public auditing schemes cannot resist a malicious auditor. To solve this problem, we proposed a public auditing framework based on blockchain technology and certificateless cryptography. In this framework, the distributed nodes in the blockchain network act as auditors to check the data integrity, and the checking results are stored in the storage layer of the blockchain in a tamper-resistant manner; the client's operations on the data are recorded as logs signed by both the data owner and the CSP, which indicates that both parties agree with the recorded operations. Anyone can check the historical records stored in the blockchain nodes and combine them with the signed operation logs to determine the responsibility for data damage. We gave a detailed security proof of our scheme. A comprehensive performance evaluation shows that our scheme is feasible and more efficient than similar schemes.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is partially supported by the National Key Research and Development Program of China (Grant no. 2017YFD0401002-3) and the Six Talent Peaks Project in Jiangsu Province, China (Grant no. 2015-DZXX-020).