Abstract

The digitalization of the modern world and its applications is integrated more closely with mobile phones than with any other communication device. Since mobile phones have become ubiquitous, with applications for nearly all users, they have become a preferred choice for uploading sensitive information to cloud servers. Though the drive for data storage in cloud servers is implicit in their pay-per-use policies, the manipulation of data in cloud servers by hackers and hardware failure incidents, such as the one that occurred in Amazon's cloud servers in 2011, necessitate data verification at regular intervals over the data stored in remote servers. Accordingly, researchers have proposed many novel schemes for ensuring remote data integrity, but these schemes either suffer from attacks or incur considerable computation and communication overheads. This research paper provides solutions in three dimensions. First, a novel scheme is introduced to verify the integrity of data stored in remote cloud servers in the context of mobile users. Second, the computational and communication overheads during the auditing process are reduced compared with previous works. Third, the mobile user is securely authenticated during the auditing process and during dynamic data operations such as block modification, insertion, and deletion. Moreover, the proposed protocol is provably secure, exhibiting soundness, completeness, and data privacy, making it an ideal scheme for implementation in real-world applications.

1. Introduction

The modern world, which is becoming more digital every day, has mandated the need for data outsourcing to cloud servers through mobile phones, from individual users to corporate offices [1, 2]. Applications such as Google Drive, Google App Engine, OneDrive from Microsoft, Google Picasa, Adobe Cloud, Oracle Cloud, Dropbox, and Facebook have made a compelling case for everyone, from laymen to highly resourceful technocrats, to make use of the cloud for data storage. Hence, in this digital world, a person who possesses a mobile phone views this vast world as a small global village in which he has access to any information through the Internet and cloud storage.

At the same time, cloud storage has inherent benefits in terms of elasticity, reliability, a pay-per-use model, and traffic adjustability during upload, download, and other situations [3]. In such a context, mobile users who possess powerful processors and sufficient random access memory for processing data need not store the data in their local storage. Once the data is uploaded to cloud storage, they are relieved of the burden of maintaining it. But this flexible data storage comes with its own inherent disadvantages, as cited in [4]. New kinds of attacks on data storage, including on cloud servers, have become apparent. If sensitive data is captured by hackers, they can use it for mining business and usage patterns, which might indeed pave the way for losses to the actual data owners. Even the cloud servers may try to hide the fact of server crashes, which leads to permanent data loss for the data owners.

In order to enable the verification process, the user first splits the large file to be uploaded into smaller units called file blocks. The user uploads all the blocks to a remote storage area such as a cloud server. Later, if the user wants to verify the integrity of the uploaded file, he can do so by making use of some cryptographically verifiable procedure [5–7].
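The block-splitting step described above can be sketched as follows. This is an illustrative helper only, not the paper's implementation; the function name and block size are hypothetical.

```python
def split_into_blocks(data, block_size):
    """Split raw file bytes into blocks of at most block_size bytes."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

data = b"sensitive file contents to be outsourced to the cloud"
blocks = split_into_blocks(data, 8)

# The blocks can later be reassembled to recover the original file.
assert b"".join(blocks) == data
```

In the actual scheme, each such block would additionally be accompanied by a cryptographic tag before upload.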

Similarly, to ensure data verifiability in cloud servers, multiple schemes have been proposed in the literature. In one such scenario, cloud servers were vested with the responsibility of computing the proof of verification based on all of the blocks stored in the cloud storage [8, 9]. In such cases, if the cloud server has to do this computationally intensive work for hundreds of users simultaneously, it may incur a huge computational overhead. On the contrary, authors like Juels and Kaliski in 2007 [10] claimed that the audit task must be done at the cloud user's side. This method does not suit users who work with computationally constrained, battery-powered mobile phones.

A pioneering attempt by Ateniese et al. [11] in 2008 showed that a user who wants to audit file integrity need not access the entire file stored in the cloud and can delegate the audit task to a third party. Some works in this line delegated the verification rights to other parties such as trusted third parties. In the works proposed in [12–14], a patient who undergoes treatment from a doctor makes use of electronic health records stored in the cloud storage and also allows the doctor to create records and store them in the remote storage on behalf of the patient. A recent work proposed by Yu et al. in 2017 [15] is a worthwhile protocol with public auditing capability and efficient computational and storage overheads. Many other verifiable schemes have been proposed by Chen et al. in 2019 [16], Peng et al. in 2019 [17], Fujisaki and Okamoto in 1998 [18], Patra et al. in 2015 [19], and others.

Hence, a research work should be able to authenticate a legitimate mobile user during the audit response phase. In this regard, each mobile user should store a unique authentication parameter in the cloud server before sending any audit request. During the audit response, the cloud server is then able to successfully authenticate only the legitimate users, identify intruders, and abort their audit requests.

One essential consideration during protocol design is to ensure the utmost security with low computational cost while keeping the protocol robust against attackers and hackers. Thus, though there are multiple methods of ascertaining the integrity of the stored file blocks in cloud servers, each method suffers from one of several problems: the computational burden on cloud servers or data owners, the reliability of the verification procedure done by third parties on behalf of the cloud users, or the lack of authentication procedures for the cloud servers and cloud users. In this research work, a novel method which avoids the above shortcomings is proposed for the verification of data stored in remote servers, enabling the user who uploads the file to verify its integrity as well.

Unless the data stored in remote locations is verified thoroughly to the satisfaction of the user with respect to security assurance and computational and communication capabilities, not only are these schemes prone to attacks, but they are also impractical for the mobile user community, which constitutes a large portion of Internet users. Based on the necessity to address these issues, the contributions of this research work can be highlighted as follows.

1.1. Contributions of This Research Work

(i) A novel scheme to verify the integrity of the remote data and to authenticate the mobile user during the integrity auditing process is introduced
(ii) The presence of the authentication procedure prevents eavesdroppers and hackers from intruding into the system
(iii) The proposed work is resistant to attacks and computationally more efficient than the previous works
(iv) The presence of valid proofs for completeness, soundness, and perfect data privacy makes this a vital contribution for real-world data audits in the cloud

1.2. Organization of This Research Work

The organization of this research manuscript is as follows. Section 2 reviews the relevant recent and earlier works pertaining to remote data integrity verification and showcases the need for improvements in them. Section 3 provides a quick review of the preliminaries, and the architecture of the proposed protocol is given in Section 4. Section 5 presents the construction of the proposed protocol with its novel procedures for the data integrity verification process and its support for dynamic data operations. Section 6 analyses this work in terms of its correctness, soundness, and data privacy during the audit process. In Section 7, the implementation results of the protocol are compared with various schemes, and the results are tabulated. Section 8 concludes this research work.

2. Literature Survey

Latest advancements such as the Internet of Things (IoT), fog computing, blockchain-secured digital transactions, smart cities, cloud computing, and other such technologies enable a mobile user to upload sensitive information to cloud servers for future processing. Many worthwhile schemes have been put forward by researchers to ensure the correct possession of data stored in cloud servers.

Wang et al. in 2010 [20] introduced an efficient auditing framework that does not require the user to maintain a local copy of the data uploaded to the cloud server. This work is resistant to attacks from the auditor and supports integrity verification for multiple users at the same time. A scheme proposed by Zhu et al. in 2012 [21] enabled clients to store files in multiple cloud servers and introduced a scalable integrity verification service with reduced computational and communication complexity based on homomorphic procedures and index hierarchies.

A worthwhile contribution from Zhu et al. in 2013 [22] verifies the data integrity of files stored in cloud servers by making use of a fragment structure, hash table indexes, and probabilistic query-based auditing services for frequent verification. Though this work is novel, it lacks proper authentication of the user during dynamic data operations and incurs relatively significant computational overhead during the integrity verification process. An identity-based remote data integrity verification scheme proposed by Yang and Jia [23] in 2013 supports both static and dynamic data operations. In this scheme, the third-party auditor efficiently verifies the integrity of the data stored in the cloud, and provision is made for batch integrity verification for multiple data owners and multiple cloud servers at the same time. However, this work does not authenticate the third-party auditor who verifies the integrity of the data.

Huang et al. in 2014 [24] enabled third-party verification by utilizing the services of semitrusted TPAs. In this scheme, the TPA is assumed to be partially trusted, and the data owner verifies the proof handed to the TPA by the cloud server. Another work, proposed by Wang et al. [25] in 2014, was based on an identity-based verification scheme which avoids the complex certification processes of a public key infrastructure. Similarly, Yu et al. in 2015 [26] proposed a protocol for public verification using algebraic signatures of data which prevents replay and deletion attacks.

Another scheme, by Liu et al. [27] in 2017, introduced a lattice-based construction which is free from certificate verification processes and resists quantum computer attacks while ensuring data privacy against the third-party auditor. They successfully verified the integrity of the stored data in the cloud without making use of a costly certification process, and the scheme is resistant to attacks posed by the cloud server. Though this work is commendable, it lacks user authentication during the verification procedure.

A recent work by Ren et al. [28] in 2018 makes use of an rb23Tree to prevent cloud servers from manipulating some of the sensitive data and escaping the integrity verification procedure. Luo et al. in 2018 [29] proposed an efficient scheme using BLS short signatures which preserves user privacy while incurring only low computational and communication complexity. A very useful recent research work on the integrity audit of cloud data was proposed by Yan et al. in 2019 [30]; this efficient scheme preserves user privacy along with data blindness at a much lower computational cost. A well-acclaimed work by He et al. in 2015 [31] preserves conditional privacy and ensures authentication in wireless environments. Also, a notable work from Zhang et al. in 2019 [32] preserves privacy without using bilinear pairings.

2.1. Gaps in the Literature Survey

The gaps identified include the lack of authentication, susceptibility to attacks, and high computational complexity, among others. The lack of authentication may allow attackers to masquerade as legitimate parties in the verification process.

The objectives of the proposed research work are as follows:
(1) To design a novel algorithm for the remote data integrity verification process which is free from attacks
(2) To design a computationally efficient algorithm for enabling remote data integrity verification over the data stored in the cloud servers
(3) To introduce a secure authentication scheme to authenticate the mobile users during the secret key generation
(4) To enable secure authentication for the challenge-response procedure during the integrity verification process
(5) To support dynamic data operations such as modification, deletion, and insertion on blocks of the files stored in the remote cloud storage
(6) To ensure perfect data privacy from the third-party auditor (TPA), thereby allowing him only to do the verification process without gaining any information about the file stored in the cloud server

3. Preliminaries for the Proposed Work

3.1. Properties of Bilinear Pairing

Let us assume that $G_1$ and $G_2$ represent two multiplicative cyclic groups of prime order $q$, and let $g$ be a generator of $G_1$. The map $e: G_1 \times G_1 \rightarrow G_2$ is a bilinear pairing if it exhibits the following three properties:
(1) Bilinearity: $e(u^a, v^b) = e(u, v)^{ab}$ for all $u, v \in G_1$ and all $a, b \in \mathbb{Z}_q$
(2) Nondegeneracy: $e(g, g) \neq 1_{G_2}$, where $g$ is a generator of $G_1$
(3) Computability: the bilinear pairing function $e$ is computable using an efficient algorithm
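Bilinearity is the property that verification equations in pairing-based auditing rely on, since it lets a verifier move exponents freely between the two pairing arguments. In the standard notation of the pairing literature:

```latex
e\!\left(g^{a},\, g^{b}\right) \;=\; e(g, g)^{ab} \;=\; e\!\left(g^{ab},\, g\right)
\qquad \text{for all } a, b \in \mathbb{Z}_q .
```

This identity is what allows an audit equation to be checked using only public group elements, without recovering the secret exponents themselves.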

3.2. Notations

The notable notations used in this research work are presented in Table 1.

4. Architecture of the Proposed System

The architecture of the proposed system is shown in Figure 1, and it can be understood through the following steps. The system manager initializes the system. Then, the third-party auditor registers itself with the system manager. Next, a mobile user who wants to upload data to the cloud server registers with the system manager, which in turn uploads the unique parameters of the mobile user to the cloud to enable user authentication during the audit response. Apart from this, the system manager sends a parameter composed of its master secret which enables the mobile user to compute its own secret key. Now, a mobile user who wants to audit its data sends an audit request to the third-party auditor. Accordingly, the third-party auditor creates an audit challenge and sends it to the cloud server. If the user authentication is successful, the cloud server generates an audit response which is verified by the third-party auditor, and the verification status is sent to the mobile user who initiated the audit request. Besides, if a data owner wishes to update, modify, or remove any block of data uploaded to the cloud, this can be achieved as well through dynamic data operations.

The proposed system consists of four major entities: the data owner, who is the mobile user; the cloud server (CS); the system manager (SM); and the third-party auditor (TPA).

4.1. Data Owner

They are the mobile users who want to upload sensitive files to the cloud storage due to the lack of local storage and maintenance capabilities. At regular intervals, they verify the integrity of the remote data by sending auditing requests to the CS through the TPA. Moreover, a mobile user can modify, delete, or insert blocks of a file it previously uploaded to the CS.

4.2. System Manager

It is the entity which initializes the system and is responsible for generating the secret keys for the mobile user and the TPA for verification purposes. This entity also uploads the authentication parameters of mobile users to the cloud server to enable their secure authentication by the cloud server.

4.3. Cloud Server

It represents computer farms with vast data storage potential, sold under pay-per-use cost models. It is where the huge files of the mobile users, divided into individual blocks, are stored with provisions for integrity verification. The files along with their corresponding tags are stored to enable the auditing of the outsourced data and the authentication of mobile users. During the audit process, the CS receives a challenge from the TPA and sends back a response to enable integrity verification.

4.4. Third-Party Auditor

It is the entity which does the auditing work on behalf of the mobile user. The TPA receives an auditing request from the mobile user, creates an auditing challenge based on some secret parameters, and sends it to the CS for the authentication of the mobile user and the data integrity verification. The CS creates the corresponding audit response and sends it to the TPA. The TPA then verifies whether the received response is genuine. If successfully verified, this entity sends the auditing status to the mobile user.

5. The Proposed Scheme

5.1. Initialization of the System

The SM initializes the system by selecting two multiplicative cyclic groups $G_1$ and $G_2$ of prime order $q$, over which the bilinear map $e: G_1 \times G_1 \rightarrow G_2$ is defined.

It selects a hash function to produce message digests and randomly selects a secret integer together with a set of points in $G_1$, from which it computes the public parameters.

Now, the SM publishes the parameters of the system. The master secret parameters are kept secret by the SM.

5.2. Mobile User Registration in the System

This phase consists of the following steps between the mobile user (MU) and the system manager (SM). The MU sends its identity and its public key to the SM. The SM in turn computes the registration parameters.

It sends the computed values to the mobile user. Now, the MU computes the corresponding verification value and checks whether it matches the received one.

If successfully verified, the MU ascertains that it has finished its registration with the SM while sharing its public key with it. Also, the SM stores the public key of the user in its local storage. By now, the MU and the SM have identified each other. This phase prevents attacks posed by adversaries during the key generation, file upload, and integrity verification processes. Similarly, the TPA registers itself with the SM by sending its identity and its public key. Thus, the TPA and the SM identify each other as well.

5.3. System Manager Generating the Secret Key for the Mobile User

In this phase, the mobile user MU, after successful registration, converses with the SM in order to generate an exclusive secret key pertaining to this mobile user.
(1) The MU selects a value at random, computes the corresponding commitment, and sends it to the SM as depicted in Figure 2.
(2) The SM, on receiving the parameters, retrieves the stored value corresponding to the received identity from its local storage. It recomputes the expected value and checks whether it is the same as the received one. If the check fails, the operation aborts. If it succeeds, the SM selects a random value, computes the key material, and sends it to the mobile user MU. Besides, the SM updates the authentication table, as cited in Table 2, with the identity and the corresponding parameter of the MU. The system manager, at the end of each day, uploads this authentication table along with a signature over the authentication details to the cloud server CS. The signature is computed over the hash of the details in the authentication table together with the identity of the cloud server. The cloud server verifies the received data by checking the signature verification equation (equation (10)), in which the hash value pertains to the authentication table as received by the cloud server and given in Table 2. Therefore, if the received hash value and the computed hash value are the same, the equation holds, which ascertains that the received authentication table is completely intact.
(3) The MU, on receiving the encrypted message, decrypts it using its corresponding private key (using a suitable algorithm like RSA) and obtains the key parameters. Since this message is confidential, it prevents man-in-the-middle and other such attacks during its transit from the SM to the MU.
(4) Now, the mobile user computes its secret key K3.
(5) Besides, the mobile user authenticates the system manager by verifying the corresponding authentication equation.

Since the master secret value can be known only to the SM, this verification procedure successfully authenticates the SM to the mobile user MU.
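The daily authentication-table upload described above can be sketched as follows. The scheme's actual signature is pairing-based; here an HMAC over the serialized table stands in for it, purely to show the sign-then-verify flow between the SM and the CS (the key and field names are hypothetical).

```python
import hashlib
import hmac

def sign_table(table, key):
    """Toy stand-in for the SM's signature over the authentication table."""
    serialized = "|".join(f"{uid}:{param}" for uid, param in sorted(table.items()))
    return hmac.new(key, serialized.encode(), hashlib.sha256).digest()

key = b"shared-sm-cs-key"                      # hypothetical SM/CS secret
table = {"MU1": "param-a", "MU2": "param-b"}   # identity -> auth parameter
sig = sign_table(table, key)

# The CS recomputes the hash over the received table and compares.
assert hmac.compare_digest(sig, sign_table(table, key))

# Any tampering with the table breaks the check.
tampered = dict(table, MU2="param-x")
assert not hmac.compare_digest(sig, sign_table(tampered, key))
```

As in the paper, a matching recomputed value convinces the CS that the received table is intact; a mismatch causes it to be rejected.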

5.4. Tag Generation and File Upload by the Mobile User

Let us assume that the mobile user MU wants to upload a sensitive file to the public cloud server CS. To store the file without any integrity breach in transit and in order to be able to ascertain the genuineness of the file later, the mobile user performs the following steps:
(1) The MU divides the file into individual blocks.
(2) It randomly selects a secret value to be used in the remote integrity verification process.
(3) It takes each block and computes the corresponding block tag.
(4) The MU stores all the blocks of the file along with the corresponding tags of those blocks in the cloud server.
(5) Finally, the MU deletes the file from its local storage.
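The per-block tagging workflow above can be sketched as follows. The scheme's tags are pairing-based; a keyed hash over the block index and block contents stands in here just to convey the workflow, and all names are hypothetical.

```python
import hashlib

def block_tag(secret, index, block):
    """Toy stand-in for the pairing-based block tag of the scheme."""
    return hashlib.sha256(secret + index.to_bytes(8, "big") + block).digest()

secret = b"user-secret"                      # stand-in for the user's key
blocks = [b"block-0", b"block-1", b"block-2"]
tags = [block_tag(secret, i, b) for i, b in enumerate(blocks)]

# The user uploads (blocks, tags) and deletes the local copy;
# any later modification of a block invalidates its tag.
assert block_tag(secret, 1, blocks[1]) == tags[1]
assert block_tag(secret, 1, b"block-X") != tags[1]
```

Binding the index into the tag mirrors the scheme's goal of preventing the server from answering a challenge on block i with a different block.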

5.5. RDIC Challenge by the Third-Party Auditor

After a certain interval, the mobile user wants to verify the integrity of the file stored in the public cloud. In order to verify the integrity, the MU requests the required public key from the SM and obtains it. Now, the MU sends the audit parameters to the TPA. Subsequently, the TPA creates a challenge based on a few randomly selected file blocks as follows:
(1) The TPA selects a random integer for each of the randomly selected blocks to be verified.
(2) In order to identify the mobile user to the cloud server CS and to avoid man-in-the-middle and other possible attacks, the TPA computes an authentication parameter.
(3) The TPA also computes a parameter based on the relevant public key and a secret parameter.
(4) Now, the TPA creates the challenge from the selected block numbers and the randomly generated integers for those blocks, that is, a set of pairs, each consisting of a block number and the corresponding integer randomly chosen by the TPA.
(5) At last, the TPA sends the challenge to the cloud server CS.
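The challenge structure in steps (1) and (4) can be sketched as follows: a random subset of block indices, each paired with a fresh random coefficient. Function and parameter names are hypothetical, and the bound on the coefficients is a stand-in for the group order.

```python
import random

def make_challenge(num_blocks, num_challenged, q):
    """Select distinct block indices and pair each with a random integer."""
    indices = random.sample(range(num_blocks), num_challenged)
    return [(i, random.randrange(1, q)) for i in indices]

# e.g. challenge 5 of 1000 stored blocks; q stands in for a 160-bit group order
chal = make_challenge(num_blocks=1000, num_challenged=5, q=2**160)
assert len(chal) == 5
assert all(0 <= i < 1000 and 1 <= v < 2**160 for i, v in chal)
```

Sampling only a few blocks per audit is what keeps the challenge, and hence the server's work, small relative to the full file.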

5.6. RDIC Response by the Cloud Server

Upon receiving the challenge from the TPA, the cloud server creates the response as follows:
(1) First, the CS authenticates the MU by checking whether the authentication equation holds true. Here, the parameter corresponding to the mobile user is taken from the user authentication table. The verification of this equation enables the cloud server to authenticate the mobile user. If the authentication is not successful, the CS aborts the integrity verification process.
(2) It computes the data proof over the challenged blocks.
(3) It also computes the tag proof over the corresponding block tags.
(4) It sends the response to the TPA, which is waiting for it.

5.7. Integrity Verification by the Third-Party Auditor

To verify whether the file uploaded earlier has been kept intact by the cloud server, the TPA performs the integrity verification by checking the verification equation.

The proof of this equation ascertains that the individual blocks of the file stored in the remote server have been kept intact by the cloud server. The overall interaction between the TPA and the CS during the auditing process is depicted in Figure 3.
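The response-and-verify structure of Sections 5.6 and 5.7 can be illustrated with a toy homomorphic tag. With tag_i = s*m_i (mod q), the aggregated response (mu, sigma) over any challenged subset satisfies sigma = s*mu (mod q). The real scheme achieves the analogous check with bilinear pairings so that the TPA never needs the secret s; this integer sketch only conveys the aggregate-then-verify idea, and all values are hypothetical.

```python
q = 2_147_483_647                      # a prime modulus stand-in
s = 123456789                          # data owner's secret (stand-in)
blocks = [11, 22, 33, 44, 55]          # toy block values
tags = [(s * m) % q for m in blocks]   # homomorphic tags

challenge = [(0, 7), (3, 5)]           # (block index, random coefficient)
mu = sum(v * blocks[i] for i, v in challenge) % q    # data proof
sigma = sum(v * tags[i] for i, v in challenge) % q   # tag proof

# Verification: the aggregated tag proof matches the aggregated data proof.
assert sigma == (s * mu) % q

# A corrupted block breaks the check.
blocks[3] = 999
mu_bad = sum(v * blocks[i] for i, v in challenge) % q
assert sigma != (s * mu_bad) % q
```

Note the constant response size: however many blocks are challenged, the server returns only one aggregated pair, which is the source of the low communication overhead claimed for such schemes.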

5.8. Data Dynamic Operations

Under some circumstances, the information stored in a sensitive file may need to be modified, inserted, or deleted. This research work ensures this with a secure authentication procedure as follows.

For the file block modification operation, the mobile user wants to replace a block of the file with a new one:
(1) The MU finds that a block of the file stored in the cloud server needs to be replaced. Hence, it computes the corresponding block tag for the new block.
(2) Now, the MU computes the authentication parameters as in the challenge generation process.
(3) The MU sends to the cloud server the parameters indicating the file block modification operation, the block which is to be updated, the new block which is going to replace the old block, and the block tag of the new block.
(4) The CS, upon receiving the parameters from the MU, tries to authenticate the MU using the same authentication equation as in the audit phase.
(5) If the authentication of the MU is successful, the CS replaces the old block with the new one and updates the corresponding tag of the file accordingly.

To delete a block of the file, the mobile user performs the following steps:
(1) As in the block modification operation, the MU sends the parameters to the cloud server, indicating the block deletion operation
(2) The CS, upon receiving the parameters from the MU, tries to authenticate the MU and verifies whether the received signature matches the stored tag of the block which is to be deleted
(3) If both the authentication of the MU and the tag verification are successful, the CS deletes the block and its corresponding signature from its storage space

To insert a new file block into the file, the MU performs the following steps:
(1) The MU sends the parameters to the CS, indicating the block insertion operation
(2) The CS, upon receiving the parameters from the MU, tries to authenticate the MU
(3) If the authentication is successful, the CS inserts the block and its corresponding signature into its storage space
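The effect of the three dynamic operations on the server's stored block and tag lists can be sketched as follows. Authentication itself is elided here (the scheme performs it before any update), and all names are hypothetical.

```python
# Toy model of the CS's storage: parallel lists of blocks and their tags.
def modify(blocks, tags, i, new_block, new_tag):
    blocks[i], tags[i] = new_block, new_tag

def delete(blocks, tags, i):
    del blocks[i]
    del tags[i]

def insert(blocks, tags, i, new_block, new_tag):
    blocks.insert(i, new_block)
    tags.insert(i, new_tag)

blocks, tags = [b"b0", b"b1", b"b2"], [b"t0", b"t1", b"t2"]

modify(blocks, tags, 1, b"b1'", b"t1'")      # replace block 1 and its tag
assert blocks == [b"b0", b"b1'", b"b2"]

delete(blocks, tags, 0)                      # remove block 0 and its tag
assert blocks == [b"b1'", b"b2"] and tags == [b"t1'", b"t2"]

insert(blocks, tags, 1, b"bX", b"tX")        # insert a new block at index 1
assert blocks == [b"b1'", b"bX", b"b2"]
```

The key invariant mirrored from the scheme is that blocks and tags are always updated together, so later audits remain consistent with the current file contents.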

6. Security Analysis of the Proposed Work

A remote integrity protocol is assumed to be secure if it exhibits the properties of completeness, soundness, and data privacy. This section analyses the proposed protocol with regard to these essential properties.

Theorem 1 (completeness). The integrity verification done by the TPA after receiving the audit response is based on a valid proof.

Proof. In this research work, the verification equation checked by the TPA during the integrity verification process ensures the completeness of the proposed RDIC protocol. If the mobile user and the cloud server are truthful and free from deceit, then the equation holds true.

Theorem 2 (user authentication). The authentication verification performed by the cloud server during the audit response is based on a valid proof.

Let us assume that a user has uploaded its unique authentication parameter to the cloud before uploading any file. Based on a request from that user, the TPA computes the authentication parameters and sends them to the cloud server. Here, the validity check done by the cloud server during the integrity verification ensures that only a valid user is part of the verification process and not an intruder. In this case, the stored parameter is taken by the cloud server from the authentication table, and the remaining parameters are received from the TPA. Assume that an attacker, masquerading as a legitimate user with a random value, sends an audit request with forged parameters. The CS, after receiving the audit request from the attacker, retrieves the stored parameter from the authentication table and tries to verify the authentication; since the attacker's random value does not correspond to the stored parameter, the verification equation does not hold.

Thus, the authentication fails. Hence, the proposed system gives a valid proof only to a legitimate user during the authentication process.

Theorem 3 (soundness). In our scheme, an attacker who does not possess the secret parameters cannot break the system and engage in audit cheating.

The secret parameters are vital and are shared only between the MU and the TPA. An attacker may be an existing member of the system who would like to masquerade as the user by using its own values and tries to construct the verification equation, but will fail to do so, since the secret value is known only to the legitimate user and the TPA. The verification equation in the proposed work shows that, during integrity verification, an attacker who wants to break the system must possess the secret parameters, which are kept confidential and shared only between the MU and the TPA. Unless the TPA or the mobile user divulges this information, an attacker cannot guess these values by knowing only the pairing operation and the public parameters. Moreover, since the integrity verification is done based on all the blocks under consideration, neither the CS nor the TPA can engage in fraud, as the proof values cannot be computed correctly if the data in the cloud server is corrupt or has been fiddled with by an attacker.

Theorem 4 (perfect data privacy). In our scheme, the TPA is unable to learn any information regarding the data.

A user sends the audit request parameters to the TPA. During the whole process of challenge generation and integrity verification, the TPA never accesses the file blocks or their signatures. During the challenge generation process, the TPA works with parameters which do not divulge the details of any file block. Besides, during the verification process, the TPA checks an equation which does not reveal any information regarding the signatures or the file blocks. Though the data proof is based on the file blocks and the tag proof is based on the signatures of those blocks, they are computed by the cloud server and not by the TPA. The TPA plays only the verification role. Hence, the TPA cannot learn any file information from these parameters, and the system preserves the privacy of the data during the file integrity challenge and verification processes.

7. Results and Discussion

The proposed protocol is implemented on a machine with the Windows operating system, an Intel Core i5-4460 processor running at 3.20 GHz, and 4 GB of primary memory. The experiments were conducted using the pairing-based cryptography library pbc-0.5.14, and the C programming language was used for the implementation. Table 3 shows the security features provided by the proposed work and some notable similar works in the literature.

Let us denote the costs of an exponentiation operation, a hash operation, a pairing operation, one point addition, one point multiplication, one integer multiplication, and one integer addition. During the tag generation phase, a tag is generated for each block of the file. The proposed research work is compared with the significant works proposed by Yu et al. [15], Wang et al. [20], Zhu et al. [22], and Yang et al. [23]. Though the previous works strive to ensure the integrity of the documents, they lack user authentication during this process; this research work performs both integrity verification and authentication. During the document integrity verification process, the major overhead is incurred by the tag generation, proof generation, and proof verification processes. Hence, Table 4 compares the computational cost of the proposed protocol with recent protocols in the literature. The order of the group is set to 160 bits, and the tests were conducted with a file size of 1 MB comprising approximately 50,000 data blocks. Table 4 reports the costs in terms of the number of blocks uploaded to the cloud server and the number of challenged blocks during the integrity verification process.

The computation involved in system initialization is done only once, when the system is initialized. For each user, the registration computation is done only once, during user registration. A user may upload multiple files; tag generation is done only during file upload to the cloud server, while the computations for audit work are done at regular intervals. From Table 4, it is evident that the proposed work incurs relatively less overhead than the other recent works in the literature.

For tag generation during file upload, the mobile user performs one pairing operation and one exponentiation operation for each block. For n blocks, the user therefore performs n pairing and n exponentiation operations, which is considerably less than the similar works in the literature. The results shown in Figure 4 indicate that the proposed work improves upon the computational overhead of the previous protocols in the literature. The graph shows the cost comparison from a minimum of 100 blocks to a maximum of 1,000 blocks.

The mobile user incurs a computational cost of only one exponentiation and one pairing operation for the tag generation of each block, whereas all the other recent works under comparison incur greater costs. For instance, for the tag generation of one block, the work proposed by Yu et al. involves two exponentiation operations, one hash operation, and one point addition operation, which is more costly than the proposed work. From the graph, it can be inferred that, for the generation of tags for a total of 300 blocks, the proposed protocol incurs a computational overhead of 691 ms, which is 599 ms less than Yu et al.'s scheme, 390 ms less than Wang et al.'s scheme, 724 ms less than Zhu et al.'s scheme, and 270 ms less than Yang et al.'s scheme.

In Figure 5, the performance of the proposed protocol for proof generation is compared with the works proposed by Yu et al., Wang et al., Zhu et al., and Yang et al. For instance, consider the cost of proof generation for 700 blocks. In this case, the proposed protocol attains only a minor performance improvement of 10 ms and 41 ms over the works proposed by Yu et al. and Wang et al., respectively, but it shows a major performance improvement of 1,053 ms and 1,656 ms over the works proposed by Zhu et al. and Yang et al., respectively. Thus, from the figure, we can infer that, as the number of blocks increases, the performance advantage of the proposed protocol increases as well.

Similarly, Figure 6 compares the computation cost of the proof verification process performed by the third-party auditor. The proof verification makes use of the blocks randomly selected by the third-party auditor in the audit challenge. During proof verification, the TPA can compute certain recurring values only once and reuse them in the subsequent operations. The tests were conducted by increasing the number of blocks from 100 to 1,000. The results clearly indicate that, for 600 blocks, the proposed protocol incurs only 541 ms, which is 2,582 ms, 1,744 ms, 1,993 ms, and 1,622 ms less than the works proposed by Yu et al., Wang et al., Zhu et al., and Yang et al., respectively. Thus, the proposed protocol outperforms the previously proposed protocols in terms of computational overhead.
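The compute-once optimization mentioned above can be illustrated with a toy sketch. The `pairing` function below is only a call-counting stand-in for an expensive bilinear pairing, not real cryptographic arithmetic:

```python
# Toy sketch of the "compute once, reuse" optimization in proof verification.
# pairing() stands in for an expensive bilinear pairing; counting its calls
# illustrates the saving, not real cryptography.
calls = {"n": 0}

def pairing(a, b):
    calls["n"] += 1
    return (a * b) % 7919  # placeholder arithmetic only

def verify_naive(num_challenged, g, pk):
    # Recomputes the same pairing for every challenged block.
    return [pairing(g, pk) for _ in range(num_challenged)]

def verify_cached(num_challenged, g, pk):
    # Computes the pairing once and reuses it for all challenged blocks,
    # as the TPA does with its recurring values.
    e_g_pk = pairing(g, pk)
    return [e_g_pk for _ in range(num_challenged)]

calls["n"] = 0
verify_naive(600, 5, 11)
print(calls["n"])   # 600 pairing calls

calls["n"] = 0
verify_cached(600, 5, 11)
print(calls["n"])   # 1 pairing call
```

Since pairings dominate the verification cost, collapsing the per-block pairings into a single precomputed value is what keeps the TPA's overhead low as the number of challenged blocks grows.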

Table 5 provides a comparison of the communication cost incurred by the various protocols. In the table, |G| refers to the size of an element in the group G, |Z_p| refers to the size of an element in Z_p, |n| refers to the size of a block number, and c refers to the number of challenged blocks.

The communication cost is mainly due to the frequent audit challenges sent by the third-party auditor to the cloud server and the audit responses returned by the cloud server to the third-party auditor. The communication overhead arises from the messages exchanged during the registration, challenge generation, and response processes. The overhead during registration is not accounted for in this work, as registration happens only once per cloud user. In the audit challenge phase, the TPA sends the challenge message to the cloud server. The size of this message depends on the number of challenged blocks, and hence the size of the audit challenge grows linearly with c. The TPA additionally sends a value to the CS solely for authentication purposes. Thus, the audit challenge size is better than that of the Yu et al. [15] and Zhu et al. [22] schemes and incurs overhead similar to that of the Wang et al. [20] and Yang and Jia [23] schemes.
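As a rough sketch of this linear growth, suppose each challenged block contributes a block index and a random coefficient to the challenge message. The field widths below are illustrative assumptions (the 160-bit coefficient matches the group order used in the experiments; the index width is merely enough to address ~50,000 blocks):

```python
# Illustrative audit-challenge size, assuming each challenged block contributes
# a block index (|n| bits) and a random coefficient (|Zp| bits).
BLOCK_INDEX_BITS = 20   # assumption: enough to index ~50,000 blocks
COEFF_BITS = 160        # group order is set up as 160 bits

def challenge_size_bits(c):
    """Audit challenge grows linearly with the number of challenged blocks c."""
    return c * (BLOCK_INDEX_BITS + COEFF_BITS)

print(challenge_size_bits(300))   # 54000 bits
print(challenge_size_bits(600))   # 108000 bits
```

Doubling the number of challenged blocks doubles the challenge size, which is the O(c) behaviour referred to above; the constant per-block width is a modelling assumption, not a figure from Table 5.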

During the audit response phase of the proposed protocol, the cloud server sends the proof to the TPA. The sizes of both components of the proof depend on the number of challenged blocks. The proposed scheme outperforms Yu et al.'s scheme [15], which incurs a higher communication overhead, and is more efficient than Wang et al.'s scheme [20], which incurs an additional overhead compared to the proposed work. Moreover, the proposed work incurs half the communication overhead of Zhu et al.'s scheme [22] and is identical to the Yang and Jia [23] scheme. Thus, the proposed protocol incurs minimal communication overhead while supporting an efficient authentication feature with minimal computational complexity.

8. Conclusions

To sum up, a novel attack-resistant and efficient protocol for the verification of data stored remotely in cloud servers has been introduced in this research work. This work is the first of its kind to enable an authenticated verification procedure over remote data stored in cloud servers for the mobile users involved in the verification process. A thorough security analysis has been provided to establish the resistance of this work to attacks in all relevant aspects. The implementation results clearly show that this work provides an auditing service with less computational complexity than similar works in the recent literature. In this era of mobile computing, this work is particularly relevant for sensitive files stored by mobile users in cloud servers. In future, this work can be extended to support verification for the users of a corporate office or other communities who store data across multiple cloud servers.

Data Availability

No data were used to support this research work.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.