International Journal of Digital Multimedia Broadcasting
Volume 2010 (2010), Article ID 720251, 11 pages
Secure P2P Data Storage and Maintenance
EURECOM, 06560 Valbonne Sophia Antipolis, France
Received 1 August 2009; Revised 4 December 2009; Accepted 17 January 2010
Academic Editor: Golnaz Karbaschi
Copyright © 2010 Nouha Oualha et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
P2P data storage requires strong reliability and security assurances. Existing data storage solutions have been designed for centralized as well as distributed settings; yet they do not address the security and cooperation issues raised by self-organization. P2P systems also introduce new needs regarding data availability due to the dynamicity of the infrastructure, which are unaddressed so far. This paper first discusses the approaches for tackling these problems. A solution is then introduced that relies on self-organizing security mechanisms in conjunction with a data rejuvenation scheme using erasure codes.
P2P file sharing systems (e.g., KaZaA and BitTorrent) aim at fairly distributing storage services to the whole community of peers. P2P data storage solutions like Wuala, AllMyData, and UbiStorage have recently appeared with similar scalability objectives. Such approaches unfortunately do not take the selfish nature of peers into account, even though free riding is a well-known fact in unregulated P2P file storage applications. Selfishness is however much more critical for data storage in that selfish peers may destroy the data they are supposed to store on a long-term basis.
Techniques for remotely detecting data corruption or destruction have been analyzed in [6–10], and dedicated cryptographic primitives have been proposed. These studies however do not consider the need for a self-organizing data restoration mechanism, which is critical once data corruption has been detected in order to achieve long-term storage reliability and availability.
The dynamic nature of P2P systems exacerbates these issues in data storage applications because peers can sometimes be offline while their data still need to be preserved within the network. Achieving scalability requirements in that context requires data storage and verification tasks to be delegated, which has not been addressed in existing systems.
This paper is structured as follows: the requirements for a secure P2P data storage and maintenance mechanism are first introduced and its five main phases are described. A threat model is then presented that outlines possible attacks that may compromise such a protocol. A new solution is then introduced for achieving self-organizing remote data integrity checking mechanisms in conjunction with an erasure code-based data rejuvenation scheme. The security of this scheme is finally discussed together with data availability and reliability, the latter being analyzed using an analytic model of the maintenance process.
2. Problem Statement
We consider in this paper a P2P storage system in which a peer, the data owner, can store its personal data at other network peers. These peers, named holders, should store the data until the owner requests its retrieval.
2.1. Requirements
In order to achieve both secure and efficient data storage and maintenance in a dynamic and autonomous system like a P2P network, the solution must ensure the following requirements.
Since current and future P2P storage systems rely on an autonomous communication design, peers take the role of both owners and holders. Therefore, the proposed solution should be designed for a self-organizing environment whereby all operations including data maintenance are performed by simple peers and without the help of any authority. Since no security server or trusted server can be assumed to be present, security solutions such as peer authentication or data integrity using certificate-based authentication mechanisms have to be revisited.
The performance of P2P storage strongly relies on the cooperation of peers. Still, peers are assumed to have limited resources, in particular in terms of storage space or bandwidth available. The scarcity of such resources inherently incites peers to behave selfishly if they are assumed to be rational, even though it should be noted that altruistic behaviors are frequently witnessed in P2P file sharing infrastructures for instance. Selfish behaviors can have a strong impact on the performance of the network. In order to reduce their effect, collaboration among peers must be encouraged.
The main goal of a P2P storage system is of course to guarantee that stored data can ultimately be retrieved. Since data are not stored on a centralized server and since P2P networks are assumed to be very dynamic, the stored data should remain available even as peers leave the network. Data availability can be increased with data redundancy techniques.
Data should not only be available at any time but also preserved on the long term. Data integrity thus has to be ensured and, in case of errors, a peer should be able to detect and repair the compromised data.
2.2. Threat Model
The P2P data storage and maintenance mechanism is exposed to the following attacks.
Data Destruction
The destruction of data stored at a holder must be detected as soon as possible. Destruction may be due to generic data corruption or to a faulty or dishonest holder. This threat may result from a voluntary and malicious action or from free riding. In the latter case, peers may not contribute to the storage service yet claim that they do so. Such cheating holders should be discouraged through remote data integrity verification.
Man-in-the-Middle Attacks
The storage phase is exposed to man-in-the-middle attacks whereby a malicious holder asks another honest holder to store the data in its place and to also generate the response to each challenge that may be produced by the verifier. This way, the malicious holder seems honest although it does not store any of the data.
Collusion Attacks
Collusion attacks aim at taking unfair advantage of the storage application. There are two possible attacks. First, replica holders may collude so that only one of them stores the data, thereby defeating the purpose of replication to their sole profit. Second, a dishonest verifier may collude with a data holder to help it answer honest verifiers' challenges without storing the data.
Denial-of-Service (DoS) Attacks
DoS attacks aim at disrupting the storage application. The introduction of security and reliability mechanisms may open new ways to attack the storage service. For instance, the holder may be flooded by verification requests, either from dishonest verifiers or from attackers that have not been delegated by the owner. Malicious verifiers may also flood the network with useless repair messages. In order to prevent the latter type of attack, a threshold number t of honest verifiers is defined: at least t verifiers must detect such a problem in the verification phase before a new data replica is generated. All such attacks should be prevented, or at least mitigated.
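The threshold rule can be sketched as a small vote counter that fires only once t distinct verifiers concur; the class name `RepairTrigger` and its interface are illustrative, not part of the paper's protocol.

```python
# Sketch of the threshold rule: repair starts only once at least t distinct
# verifiers have reported the same block as lost. Illustrative names only.

class RepairTrigger:
    def __init__(self, t):
        self.t = t            # minimum number of concurring verifiers
        self.alarms = {}      # block_id -> set of alarming verifier ids

    def report(self, block_id, verifier_id):
        """Record one verifier's alarm; return True when repair should start."""
        voters = self.alarms.setdefault(block_id, set())
        voters.add(verifier_id)   # repeated alarms by one verifier count once
        return len(voters) >= self.t

trigger = RepairTrigger(t=3)
assert not trigger.report("blk7", "v1")
assert not trigger.report("blk7", "v1")   # same verifier flooding: no effect
assert not trigger.report("blk7", "v2")
assert trigger.report("blk7", "v3")       # third distinct verifier reaches t
```

Counting distinct verifier identities, rather than raw messages, is what blunts a single malicious verifier flooding the system with repair requests.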
Attacks against the Repair Phase
During the repair phase, holders may cheat by performing a bogus data rejuvenation. Verifiers may also play a part in constructing such bogus data.
2.3. An Overview of Existing Approaches
A storage mechanism mainly consists of two phases: data storage, whereby the owner stores some data at one peer, and data verification, whereby it verifies that the data is actually stored. However, in order to address all the requirements and threats described above, we further refine the storage service into five sequential phases (see Figure 1). During the selection phase, potential holders of the data are elected by the data owner, who later stores its data at these holders during the storage phase. The owner then appoints verifiers for its remote data during the delegation phase, and these verifiers periodically check the availability and integrity of the stored data during the verification phase. Whenever the verifiers detect any data destruction or corruption, the repair phase is activated, in which they generate a new copy of the data with the help of the remaining holders. These phases are described below in more detail.
Selection is the process through which the peers that are asked to store data are elected. The goal of the selection phase is to choose a set of peers that can maintain data availability while consuming minimal bandwidth. There are two possible techniques for holder selection. A discriminatory selection determines specific peers chosen such that they satisfy some constraint (e.g., they exhibit a correct behavior) or such that they share similar characteristics with the owner, like their online availability or dedicated bandwidth. In contrast, random selection is generally preferred for its simplicity and its lower per-peer bandwidth consumption. TotalRecall and other P2P storage systems rely on a distributed hash table (DHT) to randomly select data holders: the selection is realized by randomly choosing a value from the DHT address space and routing to that value. Random selection has been claimed to mitigate some types of preset collusion between holders. Analyses of peer selection strategies likewise prove the positive effects of randomization through the study of a stochastic model of a P2P system under churn. After holders have been selected, the owner can directly contact them for data storage. To mitigate the problem of peers having multiple identities, first described as a Sybil attack, peers joining the system may pay with computational, bandwidth, or storage abilities, such as crypto-puzzles (exhaustive surveys of countermeasures to the Sybil attack are available in the literature).
Once the peers which will store the data have been selected, the owner sends the data to these potential holders. Data availability is ensured by implementing some form of redundant storage, through either replication or erasure coding. With replication, a simple copy of the data is distributed to each selected peer. With erasure coding, the data is instead divided into several blocks, and additional blocks are generated so that the data can be reconstructed as soon as a given number of blocks are retrieved. Replication, which has mostly been used in DHTs, increases the storage overhead and maintenance bandwidth more seriously without a comparable increase in fault tolerance. In contrast, erasure codes offer a better balance between storage overhead and fault tolerance. Many storage systems like [3, 4, 13, 19] rely on the latter. The overhead introduced by data redundancy can however be reduced. For instance, Wuala reduces the remote storage space allocated to a peer in exchange for an equivalent local storage space based on the peer's probability of being online: the unallocated storage space serves for trading space on other peers in order to achieve a redundant storage. Erasure codes are more complex than replication; in particular, the maintenance of coded data blocks introduces additional computational costs since it requires performing the coding yet again. Communication costs are also incurred to retrieve a minimum number of coded blocks from several holders. A tradeoff between storage requirements and data maintenance costs must be determined when considering the use of erasure codes or replication; quantitative simulation may help in doing so. Moreover, in the case of replication, since the size of the data can be very large, holders may not cooperate and may cheat on storing the data. They may even collude and ensure that only one holder stores the data for all the other selected ones.
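The fault-tolerance argument can be made concrete with a back-of-the-envelope calculation, assuming holders are online independently with probability p; all numbers are illustrative.

```python
# At equal storage overhead (2x), an erasure code spreads risk over more
# holders than replication and yields higher availability. Independent
# holder failures and p = 0.9 are illustrative assumptions.
from math import comb

def replication_availability(copies, p):
    # data survives if at least one full copy sits on an online holder
    return 1 - (1 - p) ** copies

def erasure_availability(n, k, p):
    # data survives if at least k of the n coded blocks are reachable
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9                                  # probability a holder is online
r = replication_availability(2, p)       # 2 copies: 2x storage overhead
e = erasure_availability(16, 8, p)       # (16, 8) code: also 2x overhead
assert e > r                             # same overhead, better availability
```

Note the balance cuts the other way when p is low: with unreliable holders, needing k blocks simultaneously online can make erasure coding worse, which is one reason the tradeoff has to be evaluated quantitatively.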
Data personalization has been introduced as a solution to this threat: the owner generates a single, different replica of the data for each holder and ensures that the response to the challenge during the verification phase is also different (e.g., [6, 21]). This type of collusion may also arise with erasure coding, even though it becomes problematic only if the number of colluding holders exceeds the number of original data blocks.
As previously described, the storage mechanism should ensure that the data is continuously available and that holders are rightly claiming to store the data assigned to them. The verification phase relies on specific challenge-response protocols that achieve remote data integrity verification. P2P networks being very dynamic, the owner cannot be assumed to be always online, in particular if the storage service is used for backup purposes. During periods when the owner is not present in the network, data verification should still be ensured by the owner's delegates. We term such peers verifiers. The owner provides verifiers with some information about the data, which we call the security metadata, and which serves as a basis for remote data integrity checking. Verification is itself a self-organized process subject to peer selfishness, which is mitigated by distributing the delegated verification to a set of verifiers that are also given incentives to cooperate. The distribution of the verification also improves performance through load balancing.
Some storage systems rely only on data redundancy to achieve data availability. The degree of data redundancy in the network is frequently adjusted as a function of locally estimated network dynamics. However, such systems provide probabilistic guarantees for data availability that may be sufficient only if the degree of data redundancy is considerably high. Otherwise, deterministic guarantees should be provided by regularly checking storage at data holders. P2P storage systems generally use timeouts or heartbeats to detect peer failures. For example, OceanStore utilizes Tapestry, a DHT that uses periodic user datagram protocol (UDP) probes to gauge link conditions in order to detect server failures. When the redundancy level for some data has dropped below a critical level, servers owned by OceanStore service providers (OSPs) trigger the recreation and a new dissemination of lost data blocks. The redundancy level may however fall below the acceptable level because of the maliciousness of storage servers without being detected: probes that sense peer failures are not sufficient to detect storage default at untrusted holders. A new type of challenge-response protocol has been proposed to tackle the problem of remotely proving the integrity of some data stored by a holder (e.g., [6–9, 23]), as plain hash functions for instance are not enough. Even though these new cryptographic primitives prevent the generation of correct responses without the data, a cheating holder may simply not reply to a verifier's challenge, thereby pretending to be offline or crashed. Distinguishing between permanent failures, malicious or not, and transient ones is difficult. This is generally handled through the use of a grace period during which the verifier waits for challenges to be answered before declaring the holder faulty.
Detecting that one of the holders has cheated and does not store the data anymore should trigger a data rejuvenation operation in order to ensure data availability. The verifier in charge should select another peer to perform the operations required to store the data and to generate the corresponding security metadata. Given the dynamicity of P2P networks, such operations should not rely on the presence of the owner. Additionally, the cooperative behavior of the peers participating in the repair operations should be stimulated. The recovery may be triggered almost immediately after the detection of a cheating or delayed holder; however, simulation results demonstrate that delayed repair (lazy repair) achieves a better tradeoff between data availability and overhead costs than immediate repair (eager repair) for large data sizes and highly dynamic systems.
Since holder selection has been widely studied in the literature [11–15], this paper focuses on storage, verification, and repair phases. We introduce a new data storage and maintenance scheme that does not rely on a centralized entity nor require the data owner to be online during data maintenance.
2.4. Contributions of this Paper
The contributions of the paper cover several challenging issues that have not been addressed in previous work on P2P storage.
(i) In addition to peer-to-peer storage, which is not new in itself, the scheme proposed in this paper aims at realizing both the monitoring and maintenance services in a self-organized way. To our knowledge, the proposed scheme is the only one which ensures data integrity and offline maintenance at the same time. As opposed to existing reliability mechanisms, the new protocol does not require the presence of the data owner for maintenance.
(ii) The security of the proposed scheme does not rely on the presence of trusted entities, but rather on a fault-tolerant and threshold-based approach to handle the maliciousness and selfishness of data holders. This paper in particular makes use of an epidemic model to validate the proposed solution. That model proves that, thanks to the proposed maintenance scheme, the P2P storage system converges to a stable state that fulfills data reliability and availability requirements.
3. Erasure Coding-Based Storage and Maintenance
This section introduces a new data storage and maintenance protocol for P2P storage systems. Like other systems (e.g., [3, 4, 13, 19]), the proposed protocol uses erasure coding rather than replication for performance reasons (see Section 3.4). In contrast with these approaches, however, the proposed protocol combines a data integrity check protocol with offline maintenance. To our knowledge, the presented protocol is the only maintenance method proposed so far that takes the maliciousness of holders into consideration and that additionally requires the intervention of neither the data owner nor a trusted server.
3.1. Cryptographic Background
The security of the following protocol relies on a cryptographic function f that realizes a homomorphism. The function is used to map each generated erasure coding block to a piece of metadata that is used for verifying the integrity of the block. For instance, for a coded block b, the corresponding metadata is f(b), which also serves as a block digest to be stored at the appointed verifiers.
The homomorphism preserves linear combinations, that is, for each set of erasure coding blocks b_1, ..., b_k and random coefficients a_1, ..., a_k we have
f(a_1 b_1 + ... + a_k b_k) = a_1 f(b_1) + ... + a_k f(b_k),
written here additively; for instantiations based on modular exponentiation, the right-hand side becomes the product of the f(b_i) raised to the powers a_i.
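As a runnable illustration, here is a toy multiplicative instantiation, f(b) = g^b mod p, in the spirit of the discrete-logarithm constructions mentioned next; the parameters are tiny stand-ins and offer no security.

```python
# Toy discrete-log-style homomorphism: f(b) = g^b mod p. A linear
# combination of blocks maps to a product of powers of the f(b_i), so the
# f(b_i) alone suffice to recompute f of any combination.
p = 2**61 - 1                    # a Mersenne prime; illustrative only
g = 3

def f(b):
    return pow(g, b, p)          # f(b) = g^b mod p

blocks = [12345, 67890, 13579]   # coded blocks viewed as integers
coeffs = [3, 5, 7]               # random coefficients a_i

# f applied to the linear combination (exponent reduced mod p - 1):
lhs = f(sum(a * b for a, b in zip(coeffs, blocks)) % (p - 1))
# the same value recomputed from the f(b_i) alone:
rhs = 1
for a, b in zip(coeffs, blocks):
    rhs = rhs * pow(f(b), a, p) % p      # product of f(b_i)^(a_i)
assert lhs == rhs                        # homomorphism holds
```

This is exactly the property the maintenance scheme exploits: verifiers holding only the f(b_i) can derive valid metadata for any new linear combination of blocks.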
There are several possible implementations of the homomorphic function for data integrity verification purposes: it can be constructed based on the Discrete Logarithm problem, or it may rely on elliptic curves.
3.2. Scheme Description
Only the remaining four phases of the proposed protocol are described in this section; the selection phase can simply rely on a random selection as discussed above.
As discussed in Section 2, data have to be stored at multiple peers in order to ensure data availability and reliability. Secure data storage with the simple replication technique has been proposed and evaluated in prior work. Since the use of erasure coding techniques provides the same level of reliability as replication but with much lower storage requirements at holders, a new storage mechanism based on erasure codes is proposed. In this phase (Figure 2), the data is first divided into k blocks d_1, ..., d_k. These blocks are then encoded to produce n coded blocks b_1, ..., b_n such that any k of these blocks enable the recovery of the original data. Blocks are coded using random linear erasure codes: the generating matrix is defined as G = [I | R], where I denotes the k-by-k identity matrix and R denotes a random (n - k)-by-k matrix (i.e., the entries of R are chosen randomly from the underlying finite field). Each coded block is generated using the following linear operation over that field:
b_i = g_i1 d_1 + ... + g_ik d_k,
where g_ij is the entry of G at the ith row and jth column. The coded block b_i is finally sent by the owner to one distinct holder, named the ith holder.
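A minimal sketch of this encoding, and of recovery from any k blocks, worked over a prime field for simplicity; the field size, seeds, and helper names are illustrative (real systems typically work over GF(2^w) for efficiency).

```python
# Random linear erasure coding over a prime field, following the
# G = [I | R] construction: identity rows make the code systematic,
# random rows provide the redundancy. Illustrative parameters only.
import random

P = 2**31 - 1          # prime field modulus (illustrative)

def make_generator(k, n, rng):
    """n x k generating matrix: identity on top, random rows below."""
    G = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
    G += [[rng.randrange(P) for _ in range(k)] for _ in range(n - k)]
    return G

def encode(G, data):
    """b_i = sum_j G[i][j] * d_j (mod P), one coded block per holder."""
    return [sum(g * d for g, d in zip(row, data)) % P for row in G]

def decode(G, blocks, idx):
    """Recover the k data blocks from any k coded blocks (rows idx of G)."""
    k = len(idx)
    # Gauss-Jordan elimination on the k x k submatrix, augmented with blocks
    A = [G[i][:] + [b] for i, b in zip(idx, blocks)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)          # field inverse (Fermat)
        A[col] = [x * inv % P for x in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                fac = A[r][col]
                A[r] = [(x - fac * y) % P for x, y in zip(A[r], A[col])]
    return [A[r][k] for r in range(k)]

rng = random.Random(42)
data = [11, 22, 33, 44]                  # k = 4 original blocks
G = make_generator(4, 8, rng)            # (n, k) = (8, 4)
coded = encode(G, data)
assert coded[:4] == data                 # systematic: first k blocks are the data
assert decode(G, coded[4:], [4, 5, 6, 7]) == data   # any k blocks suffice
```

Decoding inverts the k-by-k submatrix of G picked out by the surviving blocks; as the text notes later, such a random submatrix is invertible with high probability for a large field.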
The owner delegates (Figure 3) the verification task for one coded block to a number of verifiers (a verifier may be responsible for several coded blocks). A verifier assigned to the ith holder receives from the owner the metadata m_i = f(b_i), where the coded block b_i stored at that holder is interpreted as an integer (we call ith verifier any verifier of the set of verifiers assigned to the ith holder).
Based on the stored metadata, the verifier is able to periodically check whether the holder stores the block or not (Figure 4). This verification only guarantees the storage of one block and is therefore considered partial. The verifier first sends a challenge message f(r) to the holder, where r is a freshly generated random number. Upon reception of the challenge, the holder computes and sends the product b_i f(r) to the verifier. The verifier checks whether the equality b_i f(r) = r m_i holds. Indeed, thanks to the homomorphism of the function f, the holder's response can actually be written as
b_i f(r) = f(b_i r) = r f(b_i) = r m_i.
If the latter equality is not met, the verifier detects that the block has been either corrupted or destroyed by the holder. The reaction to this event should result in the regeneration of a new block to replace the lost one. This operation is performed in the next phase.
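A toy version of such a verification round, again using the discrete-log stand-in f(b) = g^b mod p rather than the paper's exact primitive; the point is that the verifier keeps only m_i, yet the holder must know the whole block to answer.

```python
# Toy challenge-response round with f(b) = g^b mod p: the verifier stores
# only the small metadata m_i = f(b_i); answering a fresh challenge r
# requires the block b_i itself. All parameters are illustrative stand-ins.
import random

p = 2**61 - 1
g = 3

block = 987654321987654321            # the coded block b_i held by the holder
metadata = pow(g, block, p)           # m_i = f(b_i), held by the verifier

r = random.randrange(2, p - 1)              # verifier: fresh random challenge
response = pow(g, r * block % (p - 1), p)   # holder: needs b_i to compute this
assert response == pow(metadata, r, p)      # verifier: m_i^r = g^(r * b_i)
```

Because r is fresh for every round, a holder cannot replay an old answer, and the one-wayness of f prevents it from deriving future answers from past ones.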
To activate this phase, a fraction of the verifiers assigned to a given holder, consisting of at least t peers, must detect the destruction of the block stored at that holder. They first select a new holder at random. The new block is generated by a coding operation over k blocks, executed by the new holder (Figure 5). The latter receives k verified blocks b_{i_1}, ..., b_{i_k} from a randomly selected set of the remaining holders. The verifiers also agree on a seed that is sent to the new holder; the seed can simply be computed as the sum of random numbers, each of them chosen by one verifier. The seed allows the generation of k random coefficients a_1, ..., a_k. The new holder then computes the new block as
b' = a_1 b_{i_1} + ... + a_k b_{i_k}.
The newly generated block can be written as a linear combination of the original data blocks: since each block transmitted by the holders participating in the generating process is itself such a combination, the coefficient of each original data block in b' is the corresponding combination of generating matrix entries. As a result, the generated block is coded according to the random linear erasure coding scheme. It has been demonstrated that any k-by-k submatrix of a random matrix is invertible with high probability for a large field size; thus the erasure coding property is still provided by the new block.
Moreover, the new block is distinct from the lost block as well as from the remaining blocks stored in the system. The indexes of the holders involved in the redundant block generation process, along with the seed, must be kept by the verifiers before being handed out to the owner, allowing the latter to update the generating matrix of the erasure code. If block b_i has been destroyed, the update only affects the ith row of the matrix: the new row is the corresponding combination of the contributing holders' rows, that is, for each j in 1, ..., k,
g'_ij = a_1 g_{i_1 j} + ... + a_k g_{i_k j}.
Each verifier assigned to the revoked holder keeps its role as a verifier for the new holder. However, it requires new metadata for the new coded block (Figure 6), computed as a linear combination, using the same set of coefficients, over the metadata stored at the other verifiers (those responsible for the holders that have been involved in the block generation):
m' = a_1 m_{i_1} + ... + a_k m_{i_k}.
The new metadata corresponds to the new block stored at the new holder; this is realized owing to the homomorphism of f:
m' = a_1 f(b_{i_1}) + ... + a_k f(b_{i_k}) = f(a_1 b_{i_1} + ... + a_k b_{i_k}) = f(b').
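The repair-time bookkeeping can be illustrated with the same toy homomorphism: the new holder combines blocks while the verifiers combine only metadata, and the two results must match. All parameters below are illustrative.

```python
# Repair sketch with f(b) = g^b mod p: the new holder forms a linear
# combination of surviving blocks; the verifiers derive the matching
# metadata from the old metadata alone, never seeing any block.
import random

p = 2**61 - 1
g = 3
f = lambda b: pow(g, b, p)

rng = random.Random(7)                    # stands in for the agreed seed
blocks = [1111, 2222, 3333, 4444]         # k surviving coded blocks
metas = [f(b) for b in blocks]            # metadata held by the verifiers

coeffs = [rng.randrange(1, p - 1) for _ in blocks]   # seed-derived a_i

# New holder: linear combination of the received blocks (exponent domain)
new_block = sum(a * b for a, b in zip(coeffs, blocks)) % (p - 1)

# Verifiers: the same combination carried out on metadata only
new_meta = 1
for a, m in zip(coeffs, metas):
    new_meta = new_meta * pow(m, a, p) % p

assert new_meta == f(new_block)           # metadata matches the new block
```

This is why the owner can stay offline during repair: neither the new block nor its metadata requires any secret of the owner, only the public homomorphism and the agreed seed.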
The owner and the verifiers that have been offline for a while must contact the rest of the verifiers to update their list of holders and all related information whenever they reconnect to the network. This update process can be automated, for instance, if verifiers periodically exchange information with one another. To guarantee a consistent update process, a sufficient number of verifiers (at least t) should be contacted before the update is effective. Alternatively, a tracking server might be established by a trusted authority such as a service provider: that tracker would collect and maintain the location of blocks and the addresses of data holders, similarly to P2P file sharing trackers that point at content repositories. Upon request, the tracker might then provide such information to crashed data owners or to new verifiers. Solutions based on the establishment of a DHT to keep track of data holders might also be envisioned in order to implement such a holder repository in a self-organized way, although such an approach would certainly open the door to Sybil or pollution attacks similar to those P2P file sharing systems experience.
3.3. Security Evaluation
This section analyzes the security of the proposed data storage and maintenance protocol with respect to the attacks discussed in Section 2.2.
Preventing Data Destruction
Each verifier checks the availability of one remote coded block. The destruction of any block is detected through the periodic verification. The verification protocol is a proof of knowledge protocol. The protocol is complete thanks to the homomorphism of f, as explained earlier: the verifier always accepts the proof as valid if the holder follows the protocol. The protocol is also sound: the verifier will not accept the proof as valid if the holder has destroyed or corrupted the block. Because we assume that the homomorphism is also one-way, the holder cannot infer the expected response from the challenge alone and must keep the whole block to correctly answer the verifier's challenge. The verification protocol constructed using elliptic curves has been proved to be a proof of knowledge protocol.
With erasure codes, the produced blocks that will be stored at holders inherently differ from each other, even though a collusion between k or more holders may succeed. This type of collusion is however unlikely because it requires the transfer of k blocks (together comparable in size to the original data) to one of the colluding holders so as to re-encode the destroyed block. This entails considerable bandwidth and computation costs for each verification operation.
A block generated during the repair phase differs from the remaining stored blocks. This is guaranteed by the randomization added by the seed, which is chosen cooperatively by the verifiers in order to limit any collusion between them.
Preventing DoS Attacks
A quota system can be introduced into the protocol to regulate the number of challenge messages a verifier is allowed to send to a given holder during a time frame. This mitigates flooding attacks against the holder launched by a malicious verifier.
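One way to sketch such a quota is a fixed time window per verifier; the class name, window length, and quota value are illustrative, not specified by the paper.

```python
# Per-verifier challenge quota: a holder answers at most `quota` challenges
# per verifier within each time window, dropping the rest. Illustrative.
class ChallengeQuota:
    def __init__(self, quota, window):
        self.quota = quota            # max challenges per verifier per window
        self.window = window          # window length in seconds
        self.log = {}                 # verifier_id -> (window_start, count)

    def allow(self, verifier_id, now):
        start, count = self.log.get(verifier_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0     # window expired: reset the counter
        if count >= self.quota:
            return False              # over quota: drop the challenge
        self.log[verifier_id] = (start, count + 1)
        return True

q = ChallengeQuota(quota=2, window=60)
assert q.allow("v1", now=0)
assert q.allow("v1", now=10)
assert not q.allow("v1", now=20)      # third challenge in the same window
assert q.allow("v2", now=20)          # other verifiers are unaffected
assert q.allow("v1", now=61)          # window expired, counter reset
```

Keeping one counter per verifier identity, rather than a global one, means a flooding verifier cannot starve the honest ones of their verification rounds.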
The activation of a repair phase is made possible only with the agreement of at least t verifiers. The distribution of the work to independent verifiers mitigates the maliciousness of some of them that may flood the system with repair requests. The threshold value t is a tradeoff between two considerations: preventing verifier collusion and handling peer intermittence.
Preventing Attacks against the Repair Phase
A holder or verifier may send bogus information to the concerned peers. We argue that this problem can easily be thwarted by attaching to any such information a signature that provides proof of origin and integrity to its recipient. Each coded block or piece of metadata is associated with an owner signature that attests to its validity.
On the other hand, after a data repair phase, the new holder or the new verifier keeps a compilation of all the owner signatures necessary to validate the newly generated block or metadata. For example, a new holder can keep, along with the freshly coded block b', the seed used to generate the coefficients of the new block, and the owner-generated signatures of the k blocks from which b' was coded. When the owner reconnects to the network and updates its holder-related information (list of holders, generating matrix), it contacts the new holder, checks the validity of its signature compilation, and replaces it with its own signature over the new block.
To simplify this process, and particularly to reduce the signature overhead, the protocol may alternatively rely on a homomorphic signature (e.g., an algebraic signature) that provides the following property: the signature of a linear combination of blocks can be computed from the signatures of those blocks and the combination coefficients alone. Thus, with this type of signature, the new holder or verifier can construct a valid signature for the newly generated block or metadata based on the signatures of the generating blocks or metadata, without having recourse to the owner.
Preventing Man-in-the-Middle Attacks
The holders of a given data are selected randomly: a data owner cannot choose by itself the identities of its data holders. This means that the owner necessarily has a key ID in the DHT distinct from the key IDs of its holders. These holders are then contacted directly for data storage and verification. To prevent a man-in-the-middle attack, the response of a holder storing the block b_i to a verifier's challenge may be constructed as a digest of the challenge response along with the holder's identity ID, that is, hash(response, ID), where hash is a pseudorandom one-way function. The peer's ID can correspond to the peer's IP address, which is forgeable but may still make it possible to establish a secure channel between two peers if we assume no attack on the routing protocol. With this construction of the response, an attacker cannot trick a verifier by pretending to store the block while holding an identity different from ID.
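A sketch of binding the holder's identity into the response: `hashlib` provides the one-way function, while the proof value below is a stand-in for the actual challenge computation.

```python
# Binding the responder's identity into the challenge response: a relaying
# attacker with a different ID cannot reuse an honest holder's answer.
# The proof value is a stand-in for the real block-dependent computation.
import hashlib

def respond(proof, holder_id):
    h = hashlib.sha256()
    h.update(proof.to_bytes(32, "big"))   # the block-dependent proof value
    h.update(holder_id.encode())          # the responder's own identity
    return h.hexdigest()

proof = 123456789                         # stand-in for the challenge proof
honest = respond(proof, "10.0.0.5")       # the holder the verifier contacted
relayed = respond(proof, "10.0.0.99")     # man-in-the-middle's own identity
assert honest != relayed                  # verifier expects the ID it contacted
```

The verifier recomputes the digest with the ID of the peer it actually addressed, so an answer relayed from a different identity fails the check.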
3.4. Performance Evaluation
The performance of both the delegation and verification phases of the proposed protocol, in the particular case of elliptic curves, has already been evaluated in prior work. Since metadata are computed based on one block only, instead of the whole data, the performance in fact improves.
The proposed repair method requires the transmission of exactly k coded blocks (together amounting to the file size), only once, for the regeneration of one block, and it provides a personalized regenerated block to the new holder.
The bandwidth required for the repair is distributed among holders. Furthermore, the communication overhead of the proposed repair method can be optimized by relying on hierarchical codes. With such erasure codes, the number of coded blocks required to repair a block is at least the number of children in the tree hierarchy and at most k.
Any other communication overhead caused by verifier agreement and notification messages can be considered negligible, since the data (or the block) is considerably larger than these messages.
The linear combination of blocks is performed in the underlying field, which may increase the size of the produced block by at most k bits. We argue that this increase is insignificant given the original size of the blocks.
4. An Analytic Model for P2P Data Storage and Maintenance
This section introduces an analytic model describing the P2P storage system, inspired by epidemic models in which peers are classified into groups depending on their state. The epidemic model relates peer dynamics (e.g., peer departures and disconnections) at the network population level to holders' behavior (data conservation or destruction) and data maintenance at the peer level. Such a model makes it easy to prove whether the system finally converges to a stable state (i.e., an equilibrium) or not. With this model, we endeavor to determine the right periodicity for data maintenance.
We consider an owner that stores its data at n holders using (n, k)-erasure coding. The original data can be regenerated from any k coded blocks. The owner also delegates the verification of each coded block to v verifiers. These verifiers have the responsibility to periodically check the presence and integrity of the stored block at their assigned holder. Whenever a threshold of these verifiers detect the destruction or corruption of the block, they decide to regenerate a new block and store it at a new holder. We assume that verifiers do not need to be replaced often during the data storage period. The owner is supposed to connect to the P2P system from time to time in order to select new verifiers and to appoint them to the desired blocks.
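The verifiers' decision rule can be sketched as follows. The threshold value and names are illustrative; the paper only requires that enough of the v verifiers agree before a repair is triggered:

```python
def should_repair(verdicts, threshold):
    """verdicts: one boolean per verifier, True if the block was found
    intact at the assigned holder.  Repair is triggered once at least
    `threshold` verifiers report the block destroyed or corrupted."""
    failures = sum(1 for ok in verdicts if not ok)
    return failures >= threshold
```

Requiring agreement among several verifiers prevents a single faulty or malicious verifier from triggering spurious repairs.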
4.1. Model of P2P Data Storage without Data Maintenance
Figure 7 depicts a state model of the uptime of holders in the P2P system. We consider three states: “connected”, “disconnected”, and “left”. Initially, the owner appoints some connected peers as holders of its data. These holders may afterwards disconnect or definitively leave the system. The peer disconnection and peer departure rates, respectively denoted μ and λ, are assumed constant. Disconnected holders may reconnect at a constant rate ρ. Additionally, holders do not enter the “left” state only after a crash or departure: they also do so if they destroy the blocks they store, which we assume happens at a constant rate σ.
In this first model, we consider that holders that have destroyed blocks are not replaced (no data maintenance). We characterize each state by the number of holders that happen to be in that state at a given time t. For a given time t, this number is a random variable, but its average can be quantified with the epidemic model. We denote the average number of holders in states “connected”, “disconnected”, and “left” at time t by, respectively, c(t), d(t), and l(t), with the total number of holders being n = c(t) + d(t) + l(t).
The model does not directly describe the system at a stable state but shows whether the system converges to such an equilibrium. Indeed, the average number of holders in each state varies with time according to the differential equations derived from the state model. In the following, we write θ = λ + σ for the aggregate rate at which a holder definitively leaves the network or destroys its block. Holders come into the “connected” state when they reconnect and depart from that state when they disconnect, destroy data, or leave the network:
dc(t)/dt = ρ d(t) − (μ + θ) c(t).
Holders that go offline come into the “disconnected” state, and holders that reconnect, destroy their data, or leave the network depart from that very state:
dd(t)/dt = μ c(t) − (ρ + θ) d(t).
Holders that leave or maliciously destroy data come into the “left” state:
dl(t)/dt = θ (c(t) + d(t)).
The solution of these equations gives the average number of holders in each state at time t. Initially (at t = 0), all holders are in the “connected” state: c(0) = n, d(0) = 0, l(0) = 0.
Thus, we obtain
c(t) = n e^(−θt) (ρ + μ e^(−(ρ+μ)t)) / (ρ + μ),
d(t) = n μ e^(−θt) (1 − e^(−(ρ+μ)t)) / (ρ + μ),
l(t) = n (1 − e^(−θt)).
The number of holders still storing the data thus converges to zero with time. This means that there is a certain time t_a at which the owner's data is not available any more, that is, fewer than k holders are connected (t_a is the time limit for data availability), and there is another time t_r at which the owner cannot retrieve its data from the storage system at all, that is, fewer than k holders remain in the system (t_r is the time limit for data reliability). The time limit t_a is defined as the solution of
c(t_a) = k.
To simplify the above equation, the exponential functions can be rewritten as infinite power series (in the form of Taylor series) that can be approximated to their first order. A solution is obtained for
t_a ≈ (n − k) / (n (μ + θ)).
The time limit t_r is obtained if
c(t_r) + d(t_r) = n e^(−θ t_r) = k.
This leads to:
t_r = ln(n/k) / θ = ln(n/k) / (λ + σ).
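The predicted decay can be checked numerically. The sketch below integrates the three state equations with a forward-Euler step; the rate values are illustrative, derived loosely from the simulation parameters of Section 4.3, and all names are ours:

```python
def simulate(n, mu, rho, theta, dt, steps):
    """Forward-Euler integration of the no-maintenance state model:
         dc/dt = rho*d - (mu + theta)*c    (connected holders)
         dd/dt = mu*c  - (rho + theta)*d   (disconnected holders)
         dl/dt = theta*(c + d)             (holders gone for good)
    theta aggregates the departure and destruction rates."""
    c, d, l = float(n), 0.0, 0.0
    for _ in range(steps):
        dc = rho * d - (mu + theta) * c
        dd = mu * c - (rho + theta) * d
        dl = theta * (c + d)
        c, d, l = c + dt * dc, d + dt * dd, l + dt * dl
    return c, d, l

# Rates per day: disconnect after ~1 h online, reconnect after ~2.75 h
# offline, leave after ~2 weeks, plus one block destruction per day.
c, d, l = simulate(n=100, mu=24.0, rho=8.73, theta=1.0 + 1.0 / 14,
                   dt=0.001, steps=10_000)
```

After the simulated 10 days, virtually all holders have migrated to the "left" state, in line with the closed-form solution above.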
4.2. Model of P2P Data Storage with Data Maintenance
If we consider that destroyed blocks are detected and regenerated at other, new holders, we obtain a new state model (depicted in Figure 8). This model describes a repair phase for destroyed blocks during which new holders are introduced at a constant rate r (1/r is also the verification time period). We assume that verifiers can distinguish transient failures (i.e., disconnections) from permanent failures (i.e., data destruction or departure). The number of new holders that are regularly added to recover from data destruction depends on the number of holders in the “left” state, which consists of holders that either have maliciously destroyed data or have definitively left the network. The frequency at which such new holders are added depends on the rate at which “left” holders are detected, that is, the maintenance (verification) rate r.
The number of holders in each state satisfies the following differential equations derived from the model (in Figure 8):
dc(t)/dt = ρ d(t) − (μ + θ) c(t) + r l(t),
dd(t)/dt = μ c(t) − (ρ + θ) d(t),
dl(t)/dt = θ (c(t) + d(t)) − r l(t).
Compared with the previous model, these equations take into account the fact that new online holders, in a number depending on the number of blocks detected as lost, are added to the system; that is, there are peers that depart from the “left” state and come into the “connected” state.
The total number of holders employed increases with time, since new holders are regularly recruited by the data maintenance mechanism, while the tracked population c(t) + d(t) + l(t) = n remains constant.
Setting the derivatives to zero gives the average number of holders in each state at equilibrium:
c* = n r (ρ + θ) / ((ρ + μ + θ)(r + θ)),
d* = μ c* / (ρ + θ),
l* = θ (c* + d*) / r.
Requiring that at least k holders remain connected at equilibrium (c* ≥ k) gives a precise bound on the data maintenance period T = 1/r:
T ≤ (n (ρ + θ) − k (ρ + μ + θ)) / (k θ (ρ + μ + θ)).
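Adding the repair flow to the earlier numerical sketch shows convergence to a nonzero equilibrium. Rates are again illustrative; r = 3.8 repairs per day follows the figure quoted in Section 4.3:

```python
def simulate_repair(n, mu, rho, theta, r, dt, steps):
    """Forward-Euler integration of the state model with maintenance:
    'left' holders are detected at rate r and replaced, which feeds
    fresh holders back into the 'connected' state."""
    c, d, l = float(n), 0.0, 0.0
    for _ in range(steps):
        dc = rho * d - (mu + theta) * c + r * l
        dd = mu * c - (rho + theta) * d
        dl = theta * (c + d) - r * l
        c, d, l = c + dt * dc, d + dt * dd, l + dt * dl
    return c, d, l

c, d, l = simulate_repair(n=100, mu=24.0, rho=8.73, theta=1.0 + 1.0 / 14,
                          r=3.8, dt=0.001, steps=20_000)
```

Unlike the no-maintenance case, the number of connected holders now settles at a strictly positive equilibrium value instead of vanishing.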
4.3. Numerical Simulation
We simulated the above model of a P2P data storage system in different scenarios based on the equations developed earlier. In the simulation, peers join the system for an average lifetime of 2 weeks. Each peer stays online for 1 hour and connects on average 6.4 times a day. Additionally, holders are assumed to destroy their blocks once per day on average.
Without data maintenance, the owner's data is no longer available after only 49 minutes and can no longer be recovered after less than 2 days. With maintenance, however, the data is always available and retrievable, provided that data maintenance is performed periodically, 3.8 times per day.
Figure 9 shows the evolution over time of the average number of holders that are still storing data blocks, in different settings. The figure illustrates the fact that without maintenance (r = 0) the number of holders decreases, converging to zero. With maintenance (r > 0), on the other hand, the number of holders converges to an equilibrium value that is not null. This value depends on the maintenance rate r: if r is below the bound derived above, the equilibrium value is lower than k; otherwise it is higher than k, thus making the restoration of destroyed blocks possible. Our results show that with the data maintenance mechanism, the P2P storage system is able to achieve survivability for stored data.
Figure 10 shows the number of online holders over time, computed with and without a repair phase. With maintenance at a rate r chosen as a function of the system parameters, the system can ensure high data availability. Without data maintenance, however, the blocks required to recover the data eventually become inaccessible.
5. Conclusion
This paper introduced a P2P data storage and maintenance protocol in which the detection of data corruption and data rejuvenation are self-organizing functions.
Data corruption detection in our approach relies on the combination of homomorphism-based remote data integrity verification techniques with the generation of smaller blocks through erasure coding. Detecting data corruption is critical for preventing both selfish behaviors, which reduce the effectiveness of the storage service, and malicious attacks, which are aimed at ruining it. The techniques presented in this paper specifically make it possible to lessen verification costs through the delegation of verifications to various peers.
This protocol is the first to use peer cooperation not only for providing storage resources, but also for ensuring storage resilience. In particular, the main verification and maintenance functions of this protocol are distributed to multiple peers, which makes it easier to mitigate noncooperative behaviors while coping with churn. This protocol also ensures data availability and reliability through the use of erasure codes, which serve as a basis for both storage diversity and data rejuvenation, while making it possible to choose the tradeoff between redundancy and storage overhead.
- KaZaA, http://www.kazaa.com/.
- BitTorrent, http://www.bittorrent.com/.
- Wuala, http://wua.la/en/.
- AllMyData Tahoe, http://allmydata.org/.
- UbiStorage, http://www.ubistorage.com/.
- N. Oualha, M. Önen, and Y. Roudier, “A security protocol for self-organizing data storage,” in Proceedings of the 23rd International Information Security Conference (SEC '08), vol. 278 of IFIP International Federation for Information Processing, pp. 675–679, Milano, Italy, September 2008.
- G. Ateniese, R. Burns, R. Curtmola, et al., “Provable data possession at untrusted stores,” in Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 598–609, Alexandria, Va, USA, 2007.
- Y. Deswarte, J.-J. Quisquater, A. Saïdane, et al., “Remote integrity checking,” in Proceedings of the 6th Working Conference on Integrity and Internal Control in Information Systems (IICIS' 04), Fairfax, Va, USA, 2004.
- F. Sebe, J. Domingo-Ferrer, A. Martínez-Ballesté, Y. Deswarte, and J.-J. Quisquater, “Efficient remote data possession checking in critical information infrastructures,” IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 8, pp. 1034–1038, 2008.
- A. Juels and B. S. Kaliski, “PORs: proofs of retrievability for large files,” Cryptology ePrint Archive, Report 2007/243, June 2007.
- R. R. Dingledine, The free haven project: design and deployment of an anonymous secure data haven, M.S. thesis, Massachusetts Institute of Technology, Cambridge, Mass, USA, June, 2000.
- L. Toka and P. Michiardi, “Analysis of user-driven peer selection in peer-to-peer backup and storage systems,” in Proceedings of the 2nd ACM-Valuetools International Workshop on Game Theory in Communication Networks (GameComm '08), p. 428, Athens, Greece, October 2008.
- R. Bhagwan, K. Tati, Y. Cheng, S. Savage, and G. M. Voelker, “TotalRecall: system support for automated availability management,” in Proceedings of the 1st ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI'04), San Francisco, Calif, USA, 2004.
- N. Oualha and Y. Roudier, “Reputation and audits for self-organizing storage,” in Proceedings of the 1st Workshop on Security in Opportunistic and Social Networks (SOSOC '08), Istanbul, Turkey, September 2008.
- P. B. Godfrey, S. Shenker, and I. Stoica, “Minimizing churn in distributed systems,” SIGCOMM Computer Communication Review, vol. 36, no. 4, pp. 147–158, 2006.
- J. R. Douceur, “The Sybil attack,” in Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS '02), Cambridge, Mass, USA, 2002.
- V. Vishnumurthy, S. Chandrakumar, and E. G. Sirer, “KARMA: a secure economic framework for P2P resource sharing,” in Proceedings of the Workshop on the Economics of Peer-to-Peer Systems, Berkeley, Calif, USA, June 2003.
- B. N. Levine, C. Shields, and N. B. Margolin, “A survey of solutions to the Sybil attack,” Tech. Rep. 2006-052, University of Massachusetts Amherst, Amherst, Mass, USA, October 2006.
- S. Rhea, G. Wells, P. Eaton, et al., “Maintenance-free global data storage,” IEEE Internet Computing, vol. 5, no. 5, pp. 40–49, 2001.
- H. Weatherspoon, B.-G. Chun, C. W. So, and J. Kubiatowicz, “Long-term data maintenance in wide-area storage systems: a quantitative approach,” Tech. Rep. UCB/CSD-05-1404, EECS Department, University of California, Berkeley, Calif, USA, 2005.
- M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, “A cooperative internet backup scheme,” in Proceedings of the Usenix Annual Technical Conference (General Track), pp. 29–41, San Antonio, Tex, USA, June 2003.
- C. Leng, W. W. Terpstra, B. Kemme, W. Stannat, and A. P. Buchmann, “Maintaining replicas in unstructured P2P systems,” in Proceedings of the 4th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '08), pp. 1–12, ACM, 2008.
- D. L. G. Filho and P. S. L. M. Barreto, “Demonstrating data possession and uncheatable data transfer,” Cryptology ePrint Archive, Report 2006/150, 2006, http://eprint.iacr.org/.
- S. Acedański, S. Deb, M. Médard, and R. Koetter, “How good is random linear coding based distributed networked storage?” in Proceedings of the 1st Workshop on Network Coding, Theory, and Applications (NetCod '05), Riva del Garda, Italy, April 2005.
- T. S. J. Schwarz and E. L. Miller, “Store, forget, and check: using algebraic signatures to check remotely administered storage,” in Proceedings of the 26th IEEE International Conference on Distributed Computing Systems (ICDCS '06), July 2006.
- A. Duminuco and E. W. Biersack, “Hierarchical codes: how to make erasure codes attractive for peer-to-peer storage systems,” in Proceedings of the 8th IEEE International Conference on Peer-to-Peer Computing (P2P '08), Aachen, Germany, September 2008.
- D. S. Jones and B. D. Sleeman, Differential Equations and Mathematical Biology, Allen & Unwin, London, UK, 1983.