Abstract

We address the complex problem of recovery from concurrent failures in a distributed computing environment. We propose a new approach that effectively deals with both orphan and lost messages. The proposed checkpointing and recovery approaches enable each process to restart from its most recent checkpoint and hence guarantee the least amount of recomputation after recovery. It also means that a process needs to save only its most recent local checkpoint. In this regard, we introduce two new ideas. First, the proposed value of the common checkpointing interval enables an initiator process to log the minimum number of messages sent by each application process. Second, the determination of the lost messages is always done a priori by an initiator process, and it is done while the normal distributed application is running. This is quite meaningful because it does not delay the recovery approach in any way.

1. Introduction

It is known that checkpointing and rollback recovery are widely used techniques that allow a distributed computation to progress in spite of failures [1–11]. A global checkpoint of an n-process distributed system consists of n local checkpoints such that each of these n checkpoints corresponds uniquely to one of the n processes. A global checkpoint M is defined as a consistent global checkpoint (state) if no message is sent after a checkpoint of M and received before another checkpoint of M [4]. That is, there must not exist any orphan message between any two local checkpoints belonging to the consistent global checkpoint. The checkpoints belonging to a consistent global checkpoint (state) are called globally consistent checkpoints (GCCs).

There are two fundamental approaches to checkpointing and recovery: the asynchronous approach and the synchronous approach [12]. In the asynchronous approach, processes take their checkpoints independently. Taking checkpoints is therefore very simple, as no coordination is needed among the processes while taking them. After a failure occurs, a rollback-recovery procedure attempts to build a consistent global checkpoint. However, because of the absence of any coordination among the processes, a recent consistent global checkpoint may not exist, which may cause a rollback of the computation. This is known as the domino effect. In the worst case of the domino effect, after the system recovers from a failure all processes may have to roll back to their respective initial states and restart their computation from the beginning.

The synchronous checkpointing approach assumes that a single process, other than the application processes, invokes the checkpointing algorithm periodically to determine a consistent global checkpoint. This process is known as the initiator process. It periodically asks all application processes to take checkpoints in a coordinated way. The coordination is done so that the checkpoints taken by the application processes always form a consistent global checkpoint of the system. This coordination is achieved through the exchange of additional (control) messages, which causes some delay (known as synchronization delay) during normal operation. This is the main drawback of the method. The main advantage, however, is that the set of checkpoints taken periodically by the different processes always represents a consistent global checkpoint. So, after the system recovers from a failure, each process knows where to roll back to restart its computation; in fact, the restarting state is always the most recent consistent global checkpoint. Therefore, recovery is very simple. Hence, compared to the asynchronous approach, taking checkpoints is more complex while recovery is much simpler. Observe that the synchronous approach is free from any domino effect. The above discussion concerns determining a recovery line such that there is no orphan message in the distributed system. In this work, in addition to orphan messages, we also take care of lost and delayed messages. Before going further, we briefly state why these messages need to be considered.

Consider a simple example of a distributed system with only two processes, as shown in Figure 1(a). Process P_1, after taking the checkpoint C_1^1, sends the message m to process P_2. The receiving process P_2 processes the message, then takes its checkpoint C_2^1 and continues. Now assume that a failure f has occurred at process P_1. After the system recovers from the failure, assume that both processes restart from their respective recent checkpoints C_1^1 and C_2^1. However, process P_1 will resend the message m, since it did not have the chance to record the sending event of the message. Thus process P_2 will receive and process it again, even though it already processed it once before taking its checkpoint C_2^1. This duplicate processing of the message results in wrong computation. The message is called an orphan because the receiving event of the message is recorded by the receiving process in its recent local checkpoint C_2^1, whereas its sending event is not recorded. Unless proper care is taken, if the processes indeed restart from these two checkpoints, the distributed application will produce a wrong computation due to the presence of the orphan message.

Now consider Figure 1(b). As above, assume that after recovery the processes restart from their respective checkpoints C_1^1 and C_2^1. Note that the sending event of the message m has already been recorded by the sending process P_1 in its recent checkpoint C_1^1, so P_1 will not resend it, because it knows that it has already sent the message to P_2. However, the receiving event of the message m has not been recorded by P_2, since it occurred after P_2 took its checkpoint. As a result, P_2 will not get the message again, even though it needs the message for correct operation. In this situation the message m is called a lost message. Therefore, for correct operation any such lost message needs to be logged and resent when the system restarts after recovery.

Next consider Figure 1(c). For some reason the message m has been delayed, and P_2 did not even receive it before the failure occurred. Now, as in the case of the lost message, if the processes restart from their respective checkpoints as shown, process P_1 will not resend it and, as a result, process P_2 will not get the message again, even though it needs the message for correct operation. In this situation the message m is called a delayed message. Therefore, for correct operation any such delayed message needs to be logged and resent when the system restarts after recovery.

1.1. Problem Formulation

In this paper, we address the following problem: given the recent local checkpoint of each process in a distributed system, how to properly handle any orphan, lost, or delayed message after the system recovers from a failure so that all processes can restart from their respective recent (latest) checkpoints. This also means that a process needs to save only its recent checkpoint. We also handle concurrent process failures, that is, the case in which two or more processes fail concurrently.

To fulfill our objective, we assume that processes take checkpoints periodically with the same time period to ensure the nonexistence of any orphan message. The proposed checkpointing algorithm is nonblocking and consists of a single phase. We also assume that the time between two consecutive invocations of the checkpointing algorithm, T, is larger than the maximum message passing time between any two processes in the system. The importance of this last assumption will become clear when we discuss delayed and lost messages in Section 2.3. The proposed recovery approach needs to consider only lost messages with respect to the recent checkpoints of the processes.

This paper is organized as follows. Section 2 contains the system model and the necessary data structures. In Section 3, we state the problem associated with the nonblocking approach. In Sections 4 and 5, we describe the checkpointing and recovery approaches along with their respective performance. Section 6 draws the conclusions.

2. Relevant Data Structures and System Model

2.1. System Model

The distributed system has the following characteristics [13]: processes do not share memory and communicate via messages sent through channels; processes are deterministic and fail-stop.

2.2. Relevant Data Structures

The proposed recovery approach needs the following data structures per process for its execution.

Consider a set of n processes {P_1, P_2, ..., P_n} involved in the execution of a distributed algorithm. We assume that application messages are piggybacked with unique sequence numbers; that is, the k-th application message has k as its sequence number. These sequence numbers are used to preserve the total order of the messages received by each process. Process P_i's x-th checkpointing interval is the time between its checkpoints C_i^{x-1} and C_i^x and is denoted as (C_i^x - C_i^{x-1}). Each process P_i maintains two vectors, each of size n, at its x-th checkpoint C_i^x: a sent vector V_i^x(sent) and a received vector V_i^x(recv). These vectors are initialized to zero when the system starts. They are stated below.

(i) V_i^x(sent) = [S_{i1}^x, S_{i2}^x, S_{i3}^x, ..., S_{in}^x], where S_{ij}^x represents the largest sequence number of all messages sent by process P_i to process P_j in the interval (C_i^x - C_i^{x-1}). Note that S_{ii}^x = 0.
(ii) V_i^x(recv) = [R_{i1}^x, R_{i2}^x, R_{i3}^x, ..., R_{in}^x], where R_{ij}^x represents the largest sequence number of all messages received by P_i from P_j in the checkpointing interval (C_i^x - C_i^{x-1}). Also R_{ii}^x = 0 (see the sketch below).
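To make the bookkeeping concrete, the following minimal Python sketch shows one possible way a process P_i could maintain V_i^x(sent) and V_i^x(recv). The class and method names, and the assumption of per sender-receiver pair sequence numbers, are our own illustrative choices and not part of the original specification.

class IntervalVectors:
    # Illustrative sketch: V_i^x(sent) and V_i^x(recv) for process P_i.
    # Processes are numbered 1..n; index 0 is unused. Both vectors start at zero.
    def __init__(self, n, i):
        self.i = i
        self.sent = [0] * (n + 1)   # sent[j]: largest sequence number sent to P_j (sent[i] stays 0)
        self.recv = [0] * (n + 1)   # recv[j]: largest sequence number received from P_j (recv[i] stays 0)

    def record_send(self, j, seq):
        # seq is the sequence number piggybacked on the outgoing message to P_j
        self.sent[j] = max(self.sent[j], seq)

    def record_receive(self, j, seq):
        # seq is the sequence number piggybacked on the message received from P_j
        self.recv[j] = max(self.recv[j], seq)

    def snapshot(self):
        # copies taken at checkpoint C_i^x and sent to the initiator P*
        return list(self.sent), list(self.recv)

Snapshots of these two arrays are what the checkpointing algorithm of Section 4 sends to the initiator process.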
2.3. Delayed Message and Checkpointing Interval

We now state the reason for choosing the value of the common checkpointing interval T to be just larger than the maximum message passing time between any two processes of the system. It is known that the existing idea for taking care of lost and delayed messages is message logging. So the question naturally arises: for how long should a process go on logging the messages it has sent before a failure (if at all) occurs? We show below that, because of the above-mentioned value of the common checkpointing interval T, a process P_i needs to save in its recent local checkpoint C_i^x only the messages it has sent in the recent checkpointing interval (C_i^x - C_i^{x-1}). In other words, we are able to use as little information related to the lost and delayed messages as possible for consistent operation after the system restarts.

Consider the situation shown in Figure 2. As before, we explain it using a simple system of only two processes; the observation holds for a distributed system with any number of processes as well. Observe that, because of our assumed value of T, the duration of the checkpointing interval, any message m sent by process P_i during its checkpointing interval (C_i^{x-1} - C_i^{x-2}) always arrives before the recent checkpoint C_j^x of process P_j. Now assume the presence of a failure f as shown in the figure. Also assume that after recovery the two processes restart from their recent x-th checkpoints. Observe that any such message m does not need to be resent, as it is processed by the receiving process P_j before its recent checkpoint C_j^x. So such a message m can be neither a lost nor a delayed message, and there is no need for the sender P_i to log such messages at its recent checkpoint C_i^x. However, messages such as m' and m'', sent by process P_i in the interval (C_i^x - C_i^{x-1}), may be lost or delayed. So, in the event of a failure f, to avoid any inconsistency in the computation after the system restarts from the recent checkpoints, we need to log only such sent messages at the recent checkpoint C_i^x of the sender so that they can be resent after the processes restart. Observe that in the event of a failure, any delayed message, such as message m'', is essentially a lost message as well. Hence, in our approach, we consider only the recent checkpoints of the processes, and the messages logged at these recent checkpoints are only the ones sent in the recent checkpointing interval. From now on, by "lost message" we mean both lost and delayed messages. Observe that without such an assumption about the value of the common checkpointing interval T, the messages logged at C_i^x may include not only the ones which a process P_i has sent in its current interval (C_i^x - C_i^{x-1}), but also those which P_i sent in previous intervals.

Note that in the above discussion, we have implicitly assumed the nonexistence of any abnormally excessive delay in message communication that would violate our assumption that any message m sent by process P_i during its checkpointing interval (C_i^{x-1} - C_i^{x-2}) always arrives before the recent checkpoint C_j^x of process P_j.
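The practical consequence of this argument can be sketched as follows. This is only an illustration of the retention rule stated above, with names of our own choosing, not the authors' implementation.

class SenderLog:
    # Illustrative sketch: with T larger than the maximum message passing time,
    # only the messages sent in the interval (C_i^x - C_i^{x-1}) need to be kept
    # with the recent checkpoint C_i^x; older sent messages can be discarded.
    def __init__(self):
        self.current_interval = []          # messages sent since the last checkpoint

    def on_send(self, dest, seq, payload):
        self.current_interval.append((dest, seq, payload))

    def on_checkpoint(self):
        # Messages from earlier intervals were processed before every receiver's
        # recent checkpoint, so they can never be lost or delayed; drop them here.
        saved_with_checkpoint = self.current_interval
        self.current_interval = []
        return saved_with_checkpoint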

3. Problems Associated with Nonblocking Approach

It is known that the classical synchronous checkpointing scheme has three phases: first, an initiator process sends a request to all processes to take checkpoints; second, the processes take temporary checkpoints and reply to the initiator process; third, the initiator process asks them to convert the temporary checkpoints to permanent ones. Only after that can the processes resume their normal computation. In this paper, our objective is to design a single-phase nonblocking synchronous approach that guarantees the nonexistence of any orphan message; however, such an approach has an inherent problem. We first explain the problem associated with the nonblocking synchronous checkpointing approach and then state a solution. Although the following discussion considers only two processes, the arguments are valid for any number of processes. Consider a system of two processes P_i and P_j. Assume that the checkpointing algorithm has been initiated by an initiator process P*, which has sent a request message M_c to P_i and P_j asking them to take a checkpoint each. In our approach no additional control-message exchange is necessary for making the individual recent checkpoints mutually consistent; that is, both processes P_i and P_j act independently. Let P_i receive the request message M_c and take its checkpoint C_i^1. Assume that P_i then immediately sends an application message m to P_j at time t. Suppose P_j receives m at time (t + ε), where ε is very small with respect to t. Also suppose that P_j has not yet received M_c from the initiator process. So P_j has no idea whether the checkpointing algorithm has started or not, and therefore it processes the message. Now the request message M_c arrives at P_j, and P_j takes its checkpoint C_j^1. We find that the message m has become an orphan due to the checkpoint C_j^1. Hence, C_i^1 and C_j^1 cannot be consistent.

To avoid this problem we state a very simple solution: process P_i piggybacks a flag, say $, only with the first application message, say m, that it sends (after it has taken its checkpoint for the current execution of the algorithm and before its next participation in the algorithm) to a process P_j, where j ≠ i and 1 ≤ j ≤ n. Process P_j, after receiving the piggybacked application message, learns immediately that the checkpointing algorithm has already been invoked; so instead of waiting for the request it takes its checkpoint first, then processes the message m, and later ignores the current request when it arrives.
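The sender-side rule can be sketched as follows. We read "first application message sent ... to a process P_j" as the first message sent to each destination after the checkpoint; this reading, and all names below, are our own illustrative assumptions.

class FlaggingSender:
    # Illustrative sketch: piggyback the flag '$' only on the first application
    # message sent to each P_j after taking the checkpoint for the current
    # execution of the checkpointing algorithm.
    def __init__(self, channel, n):
        self.channel = channel                 # assumed transport with a send(dest, msg) method
        self.needs_flag = [False] * (n + 1)

    def on_checkpoint_taken(self):
        # called when this process takes its checkpoint for the current execution
        for j in range(1, len(self.needs_flag)):
            self.needs_flag[j] = True

    def send(self, j, seq, payload):
        flag = '$' if self.needs_flag[j] else None
        self.needs_flag[j] = False
        self.channel.send(j, (seq, payload, flag))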

Note that in our approach an initiator process interacts with the other processes only once, via the control message M_c. After receiving M_c, each such process, independent of what the others are doing, just takes its checkpoint, sends its vectors to the initiator process, and immediately resumes its computation. That is why we consider it a single-phase algorithm.

4. The Checkpointing Algorithm

Below we describe the nonblocking algorithm. Assume that this is its x-th invocation. The algorithm produces n globally consistent checkpoints for a distributed system with n processes; see Algorithm 1.

At each process P_i (1 ≤ i ≤ n)
  if P_i receives M_c
    takes checkpoint C_i^x;
    sends its V_i^x(sent) and V_i^x(recv) to the initiator process P*;
    // all such vectors from each P_i are used by P* to determine the lost messages
    // sent by the processes during (C_i^x - C_i^{x-1}) in the event of a failure
    continues its normal operation;
  else if P_i receives a piggybacked application message <m, $> && P_i has not yet received M_c
      for the current execution of the checkpointing algorithm
    takes checkpoint C_i^x without waiting for M_c;
    sends its V_i^x(sent) and V_i^x(recv) to the initiator process P*;
    // all such vectors from each P_i are used by P* to determine the lost messages
    // sent by the processes during (C_i^x - C_i^{x-1}) in the event of a failure
    continues its normal operation;
    // processes the received message m and ignores M_c, when received later
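For concreteness, a possible Python rendering of Algorithm 1 at a receiving process is sketched below. The helper objects (vectors, channel, save_state, deliver) are assumptions introduced for illustration only; this is a sketch, not the definitive implementation.

class CheckpointProcess:
    # Illustrative sketch of Algorithm 1 at process P_i.
    def __init__(self, pid, vectors, channel, initiator, save_state, deliver):
        self.pid = pid
        self.vectors = vectors        # maintains V_i^x(sent) and V_i^x(recv)
        self.channel = channel
        self.initiator = initiator    # P*
        self.save_state = save_state  # callable that writes the local checkpoint C_i^x
        self.deliver = deliver        # callable that hands a message to the application
        self.taken_for_current_run = False   # reset when the next execution begins (not shown)

    def take_checkpoint(self):
        self.save_state()
        sent, recv = self.vectors.snapshot()
        self.channel.send(self.initiator, ('vectors', self.pid, sent, recv))
        self.taken_for_current_run = True

    def on_request(self):
        # request message M_c from P*
        if not self.taken_for_current_run:
            self.take_checkpoint()
        # otherwise: already checkpointed on a flagged message; M_c is ignored

    def on_application(self, m, flag):
        if flag == '$' and not self.taken_for_current_run:
            self.take_checkpoint()    # checkpoint first, then process m
        self.deliver(m)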

Proof of Correctness
In the "if" block, every process P_i takes its x-th checkpoint C_i^x when it receives the request message M_c. That is, none of the messages it has sent before this checkpoint can be an orphan. In the "else" block, a receiving process P_i takes its x-th checkpoint C_i^x before processing any application message m sent by a process that took its own x-th checkpoint before sending m to P_i. Therefore the message m cannot be an orphan either. Since this is true for all processes, the recent x-th checkpoints C_i^x, 1 ≤ i ≤ n, are globally consistent checkpoints.

4.1. Performance

The algorithm is a synchronous one. However, it differs from the classical synchronous approach in the following ways: it has a single phase unlike the three-phase classical approach; it does not need the exchange of any additional (control) messages to coordinate the processes except the request message M_c; there is no synchronization delay; and finally it is nonblocking. Regarding message complexity, the initiator process broadcasts M_c only once, and there is one message containing the vectors from each P_i to P*. So the message complexity is O(n).

4.1.1. Comparisons with Some Existing Works

We use the following notations (and some of the analysis from [10]) to compare our algorithm with some of the most notable algorithms in this area of research, namely, [1, 8, 10]. The analytical comparison is given in Table 1. In this table,

C_air is the cost of sending a message from one process to another process; C_broad is the cost of broadcasting a message to all processes; n_min is the number of processes that need to take checkpoints; n is the total number of processes in the system; n_dep is the average number of processes on which a process depends; T_ch is the checkpointing time.

In Table 1, the first column is about blocking. In Koo and Toueg's work, the checkpointing scheme is blocking, so the underlying distributed application cannot resume until all processes have taken their permanent checkpoints. Hence, in the worst case, the total blocking time for the processes that need to take checkpoints is n_min times the checkpointing time T_ch per process. The other works in the table are nonblocking algorithms and therefore have zero blocking time.

For the second column, consider the work of Cao and Singhal: in the first phase a process uses two system messages while taking a tentative checkpoint, so the system message overhead is 2 * n_min * C_air. In the second phase the message overhead is min(n_min * C_air, C_broad). The total overhead is the sum of the two. The other entries can be explained in a similar way. Observe that we have a single-phase algorithm, and only one type of system message (a request message) is broadcast. Therefore C_broad is just equal to n * C_air.

Figure 3 illustrates how the number of control messages (system messages) sent and received by processes is affected by the increase in the number of processes in the system. In Figure 3, the n_dep factor is taken to be 5% of the total number of processes in the system, and C_broad is equal to n * C_air. We observe that the number of control messages does increase in our approach with the number of processes, but it stays smaller than in the other approaches when the number of processes is higher than 7 (which is the case most of the time).

5. Recovery Scheme

Our recovery approach is independent of the number of processes that may fail concurrently. In order to identify lost messages in the event of a failure, we adopt only one idea from the centralized approach [14] for message logging: all application messages are routed through the initiator process P*. However, we differ from the centralized approach in that the messages sent to a process P_k are logged at P* according to the order of their arrival at P*, and some of these messages may become lost messages in the event of a failure. This is a major difference, because the approach in [14] logs copies of only those messages which have been exchanged between any two processes, and for doing so it employs an acknowledgment protocol. In our work we denote this message log for process P_k as MESG_k, where 1 ≤ k ≤ n for an n-process distributed system. Another major difference is that in our work the initiator process P* does not save the checkpoints of the n processes; that is the responsibility of the n processes themselves.
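As an illustration of this routing-and-logging step, P* might maintain MESG_k as follows. The class, field names, and message format are our own assumptions, sketched only to show the arrival-order logging per destination.

from collections import defaultdict

class InitiatorLog:
    # Illustrative sketch: every application message is routed through P* and
    # appended to MESG_k for its destination P_k, in arrival order at P*.
    def __init__(self, channel):
        self.channel = channel
        self.mesg = defaultdict(list)                 # mesg[k] plays the role of MESG_k

    def route(self, src, dest, seq, payload):
        self.mesg[dest].append((src, seq, payload))   # log in arrival order at P*
        self.channel.send(dest, (src, seq, payload))  # forward to P_k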

The proposed recovery scheme depends on the following computation done by the initiator process. Each time the execution of the checkpointing algorithm is over, the initiator process P* determines the possible lost messages with respect to the processes' respective recent checkpoints; this is needed for consistent and correct distributed computation in the event that a failure occurs before the next execution of the checkpointing algorithm. Since this computation can be performed by P* while the normal distributed application is running, we call it the Background Computation.

5.1. Background Computation by P*

Assume that the x-th execution of the checkpointing algorithm has just finished. So P* has already collected all the n sent and n received vectors from the n application processes. Using these vectors, P* determines the lost messages, if any, sent by all other processes P_i (1 ≤ i ≤ n, i ≠ k) to each P_k in the interval (C_i^x - C_i^{x-1}), in the way shown in Algorithm 2.

For each process P_k and 1 ≤ i ≤ n, i ≠ k
   if S_{ik}^x > R_{ki}^x
     P* records the sequence numbers (R_{ki}^x + 1) to S_{ik}^x in lost-from-P_i^k;
     // messages with sequence numbers (R_{ki}^x + 1) to S_{ik}^x are
     // the lost messages from P_i to P_k
   P* forms the total order of all lost messages sent by every P_i, i ≠ k, to P_k
     using lost-from-P_i^k and the message log MESG_k for P_k;
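A hedged Python rendering of Algorithm 2 is given below. It assumes the vectors of Section 2.2 and the MESG_k logs of Section 5, with per sender-receiver pair sequence numbers, and uses names of our own choosing.

def background_computation(sent, recv, mesg, n):
    # Illustrative sketch of the Background Computation at P*.
    # sent[i] is V_i^x(sent), recv[k] is V_k^x(recv); mesg[k] is MESG_k, kept in
    # arrival order at P*, with entries (src, seq, payload).
    lost = {}
    for k in range(1, n + 1):
        lost_seqs = {}                                     # lost-from-P_i^k, keyed by sender i
        for i in range(1, n + 1):
            if i == k:
                continue
            if sent[i][k] > recv[k][i]:
                # messages (R_ki^x + 1) .. S_ik^x from P_i to P_k are lost
                lost_seqs[i] = set(range(recv[k][i] + 1, sent[i][k] + 1))
        # total order of P_k's lost messages, taken from MESG_k's arrival order
        lost[k] = [(src, seq, payload) for (src, seq, payload) in mesg[k]
                   if src in lost_seqs and seq in lost_seqs[src]]
    return lost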

5.2. Recovery

Let us assume that a failure has occurred after the processes have taken their respective x-th checkpoints; there may be concurrent failures as well. After the system recovers, the initiator process P* sends to each P_k the lost messages, if any, that P_k did not receive before its recent (x-th) checkpoint, following their total order.
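Continuing the sketch above (illustrative names only), the replay step performed by P* after the restart is then simply:

def replay_lost_messages(lost, channel):
    # Illustrative sketch: resend to each P_k its lost messages in the total
    # order determined by the background computation.
    for k, messages in lost.items():
        for (src, seq, payload) in messages:
            channel.send(k, (src, seq, payload))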

Observe that a failure may occur in the n-process system before the background computation by P* finishes. Since, as in the classical synchronous approach, we assume that P* is not faulty, P* will continue with its determination of the lost messages, and when it is done it will send these messages to the appropriate receivers.

Theorem 1. The nonblocking checkpointing algorithm together with the recovery scheme results in correct computation of the underlying distributed application.

Proof. According to the checkpointing algorithm, there does not exist any orphan message with respect to the recent checkpoints of the processes. Also, the initiator process P* identifies the lost messages, if any, with respect to the recent local checkpoints of the processes, and the recovery approach ensures that the lost messages are resent, following their total order, to the appropriate destinations after the system restarts. Therefore there does not exist any orphan or lost message with respect to the recent checkpoints. Hence the correctness of the underlying distributed computation is ensured.

5.3. Performance

The following are the salient features of our approach. First, all processes restart from their respective recent checkpoints; that is, there is no further rollback. It also means that processes save only their recent checkpoints, replacing their previous ones. Second, the recovery approach depends on the background computation by P*. This computation goes on in parallel with the normal computation, so it does not delay the recovery approach in any way; we consider this a significant advantage. Third, the recovery approach is independent of the number of processes that may fail concurrently. Fourth, the choice of the value of the common checkpointing interval T enables us to use as little information related to the lost messages as possible for consistent operation after the system restarts. The number of recovered lost messages depends on the nature of the distributed application; these messages are computational (application) messages and have to be resent for correct computation, so they do not contribute in any way to the complexity of the recovery approach.

5.3.1. Comparisons with Some Existing Works

The work in [15] is a two-phase checkpointing scheme in which a process logs both sent and received messages. Our work is a single-phase scheme, and only the sent messages are logged. The work in [6] considers only orphan messages, whereas our work considers lost and delayed messages as well. However, both works allow processes to have minimum rollback, thus allowing minimum recomputation.

In the work of [7], during normal computation, each time a process receives an application message it has to check whether it needs to take a checkpoint so that the received message cannot become an orphan. In our work this is not necessary because of the checkpointing scheme; hence we avoid the unnecessary comparisons involved in such checking. The message overhead in [7] is O(F), where F is the number of recovery lines established, whereas in our work it is absent. Note that by "message overhead" we mean the size of the control information that is piggybacked on the application messages exchanged during normal computation. Another important difference is that the work in [7] establishes a recovery line for each failure and then establishes a consistent recovery line for the distributed system after the occurrence of concurrent failures. This is not needed in our work: whether there is a single failure or concurrent failures, our recovery line always consists of the recent checkpoints of the individual processes of the system.

When compared to the classical work in [16], the following differences are observed. In [16] there are always extra control messages for each application message; that is, it requires a receive sequence number (RSN) and acknowledgment messages in addition to the application message. We do not require this. Besides, the work in [16] has the restriction that, during normal computation, the receiver of a message cannot send a new message until it receives the acknowledgment for the RSN it has sent to the sender of the message it received. This obviously results in slower execution. Our work does not impose any restriction of this kind during normal computation. Finally, we handle both single and concurrent failures, whereas only single failures are handled in [16].

The work in [17] employs a fault-tolerant vector clock and a history mechanism to track causal dependencies, orphan messages, and obsolete messages in order to bring the system to a consistent state after failures. Our approach is very simple: our checkpointing scheme makes sure that there is no orphan message, and the consistent state is always the set of the recent checkpoints of the individual processes. So we do not need any extra effort to determine a consistent state.

The classical work in [18] also employs timestamp vectors to track dependencies in order to determine a consistent state; as mentioned above, our approach is always domino-effect free and needs no such tracking. Moreover, [18] considers only single failures and its message overhead is O(n), whereas our work considers both single and concurrent failures and does not have any message overhead.

In Table 2 we give a brief summary comparing some important features of these checkpointing/recovery approaches.

6. Conclusions

In this work, we have proposed a checkpointing approach that is single phase and nonblocking in nature and has no synchronization delay. It ensures that at the time of recovery we do not have to deal with orphan messages, unlike many of the existing works, and that processes can restart from their respective recent checkpoints. The choice of the value of the common checkpointing interval T enables us to use as little information related to the lost and delayed messages as possible for consistent operation after the system restarts. The determination of the lost messages is always done a priori by an initiator process, and it is done while the normal distributed application is running; this is meaningful because it does not delay the recovery approach in any way. Besides, the recovery approach is independent of the number of processes that may fail concurrently. Finally, note that our checkpointing and recovery schemes are unaffected by any clock drift on the respective sequence numbers of the recent checkpoints of the processes, because we consider only the processes' recent checkpoints irrespective of their sequence numbers.