Abstract

Cryptanalytic time memory tradeoff algorithms are tools for inverting one-way functions, and they are used in practice to recover passwords that restrict access to digital documents. This work provides an accurate complexity analysis of the perfect table fuzzy rainbow tradeoff algorithm. Based on the analysis results, we show that the lesser known fuzzy rainbow tradeoff performs better than the original rainbow tradeoff, which is widely believed to be the best tradeoff algorithm. The fuzzy rainbow tradeoff can attain higher online efficiency than the rainbow tradeoff and do so at a lower precomputation cost.

1. Introduction

Cryptanalytic time memory tradeoff algorithms are tools for inverting generic one-way functions. They are actively used by law enforcement agencies and hackers to recover passwords protecting access to digital documents and to obtain system login passwords from stored password hashes. After a one-time precomputation phase, whose computational complexity is typically of the same order as an exhaustive computation of the one-way function on all inputs under consideration, a digest of the computation is written to a table whose size is of much smaller order than that of the complete dictionary. In the online phase, referencing the precomputation table, the input corresponding to a given inversion target is recovered with a computational complexity that is of much smaller order than that of an exhaustive trial of inputs. These algorithms allow tradeoffs to be made between the size of the precomputation table and the expected time for inversion through adjustments of various algorithm parameters.

The first time memory tradeoff method was the classical algorithm by Hellman [1] and this was soon followed by the distinguished points variant. Rivest is given credit [2, page 100] for suggesting to apply the notion of distinguished points to the classical Hellman tradeoff. Currently, the rainbow tradeoff [3] is the most widely used algorithm.

The fuzzy rainbow tradeoff [4, 5] is a more recent algorithm that combines the distinguished point and rainbow methods. The algorithm has already been used in the multitarget setting as an integral component of a fully functional attack [6, 7] on GSM phones, and an elementary analysis of the fuzzy rainbow tradeoff appeared in [8]. The latter work cites a work related to [6] and refers to the attack by the name Kraken, but none of these works cite the original publication [4, 5], indicating these to be an independent line of work. In fact, the analyses of [8] fall short of even the preliminary discussions given by [4, 5]. For now, the execution complexities of the fuzzy rainbow tradeoff are not known accurately enough for the purpose of comparing the performances of different tradeoff algorithms.

The fuzzy rainbow tradeoff, as with most other tradeoff algorithms, comes in the nonperfect table and perfect table versions. The perfect table version is expected to perform better during the online phase than the nonperfect version, but this must be paid for with a larger precomputation effort. Our previous work [9] gave an accurate performance analysis of the nonperfect table fuzzy rainbow tradeoff and compared the results with the performances of the original nonperfect and perfect table rainbow tradeoffs, which are widely believed to be the best tradeoff algorithms. The conclusions made there were that the nonperfect fuzzy rainbow tradeoff was always advantageous over the nonperfect rainbow tradeoff and that, while the perfect rainbow tradeoff could achieve somewhat better online efficiency than the nonperfect fuzzy rainbow tradeoff, for online efficiency levels that could be reached by both algorithms, the nonperfect fuzzy rainbow tradeoff could do so with a smaller amount of precomputation.

In this work, we analyze the perfect table fuzzy rainbow tradeoff algorithm to present its performance accurately and compare this with the performances of the perfect rainbow tradeoff and the nonperfect fuzzy rainbow tradeoff. Our conclusion in rough terms is that the perfect fuzzy rainbow tradeoff outperforms the two comparison algorithms. This implies that the perfect fuzzy rainbow tradeoff, which has not yet received widespread recognition, is preferable to all the well-known tradeoff algorithms. We remark that the analysis given in this paper is completely different from that of our previous paper [9], which dealt with the nonperfect table case.

One clarification must be made concerning the subject algorithm of this work. The fuzzy rainbow tradeoff, as originally presented by [4, 5], was a tradeoff algorithm designed to be used in the multitarget setting. This is where the attacker is given multiple inversion targets and is deemed successful if he is able to recover the input corresponding to at least one of the targets. However, our analysis of the algorithm in this paper will be done under the single-target setting.

Recall that a simple multitarget adaptation of the original rainbow tradeoff is quite inferior in performance [10] to the existing multitarget adaptations [11] of the classical Hellman and distinguished point tradeoffs and that the fuzzy rainbow tradeoff was designed to be a variant of the rainbow tradeoff that performs at a similar level. We have done some preliminary investigations and believe that it will not be difficult to transform the existing single-target analysis results for the Hellman and distinguished point algorithms, together with the results of the present paper, to the multitarget setting. In fact, we expect the existing equations concerning algorithm performances to remain essentially valid under the multitarget setting. Nevertheless, this transformation still requires a nontrivial amount of work and is relegated to a separate future work that focuses on the multitarget versions of the tradeoff algorithms. The current work will stay within the single-target setting.

The rest of this paper is organized as follows. In Section 2, we quickly review the fuzzy rainbow tradeoff algorithm and fix the notation. The execution behavior of the perfect table fuzzy rainbow tradeoff is fully analyzed in Section 3. This is a highly technical section and is the main contribution of this paper. Some experimental data that support the theoretical findings of this section are given in the appendix. In Section 4, we combine the results of our analysis with the existing analyses of other tradeoff algorithms to compare their performances. This could be more valuable to the practitioner than the details provided by Section 3. Finally, this paper is summarized in Section 5.

2. Preliminaries

Let us review the terminology concerning the fuzzy rainbow tradeoff algorithm and fix our notation. The reader is assumed to be familiar with the basic theory of the time memory tradeoff technique. In particular, we assume knowledge of the precomputation phase and online phase algorithms of the distinguished point (DP) and rainbow tradeoffs. The few sections in the beginning of [12] could be helpful in recalling these basics.

Throughout this paper, the one-way function F to be inverted is taken to act on a search space of size N. We fix s-many reduction functions and let F_i denote the composition of the one-way function and the reduction function of the i-th color. The number of colors s will typically lie in a moderate range; the choice of s is discussed in Section 4.3.

The structure of the precomputation matrix for the fuzzy rainbow tradeoff is a combination of those of the rainbow tradeoff and the DP tradeoff. One fixes a distinguishing property of probability 1/t and generates precomputation chains, each of which will be referred to as a fuzzy rainbow chain. That is, one iterates the one-way function F_i of a fixed color i until the first appearance of a DP, and the ending DP of this subchain is used as the starting point for the next DP subchain. The color of the iteration function is changed at each intermediate DP until one reaches the end of the s-th DP subchain. In short, each iteration of a rainbow chain is replaced by a DP chain, except that the number of colors used by each precomputation table is s and that the expected length of each DP subchain is t.

Any implementation of an algorithm that relies on DPs to terminate a task must employ a mechanism to detect chains falling into loops. Typically, a bound on the chain length is set, and chains reaching this bound are discarded, possibly to be replaced with newly generated chains. We will assume the chain length bound is large enough to make the discarding of chains very infrequent, so that any effect the discarding may have on the algorithm performance can be ignored.
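The following is a minimal Python sketch of the chain generation just described, including a chain length bound for loop detection. The search space size, the DP probability, the number of colors, and the helper functions F and F_color are toy stand-ins chosen for illustration and do not come from the paper.

```python
import hashlib

N_BITS = 24                 # toy search space of size N = 2**24
N = 1 << N_BITS
T = 1 << 6                  # expected DP subchain length t (DP probability 1/t)
S = 8                       # number of colors s
CHAIN_BOUND = 20 * T        # generous bound on the length of a single DP subchain

def F(x):
    # stand-in one-way function: a truncated hash of the input
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % N

def F_color(i, x):
    # i-th colored iteration function: the one-way function followed by a
    # simple reduction (XOR with a color-dependent constant)
    return F(x) ^ ((i * 0x9E3779B1) % N)

def is_dp(x):
    # distinguishing property of probability 1/t: the low log2(t) bits are zero
    return x % T == 0

def fuzzy_rainbow_chain(start):
    """Generate one fuzzy rainbow chain: s consecutive DP subchains, switching
    color at every intermediate DP.  Returns the terminal DP, or None if some
    subchain exceeds the length bound (a chain that appears to have looped)."""
    x = start
    for color in range(S):
        length = 0
        while True:
            x = F_color(color, x)
            length += 1
            if is_dp(x):
                break                      # end of this DP subchain
            if length > CHAIN_BOUND:
                return None                # discard chains that fail to reach a DP
    return x
```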

This paper deals with the perfect table version of the fuzzy rainbow tradeoff, and the number of ending points for each perfect table is set to m. That is, one generates sufficiently many precomputation chains for each precomputation table, so that m nonmerging precomputation chains can be collected. As with any tradeoff algorithm, the m ordered pairs, each consisting of a starting point and an ending point, are sorted according to the ending points and recorded as the precomputation table. A total of ℓ precomputation tables are created during the precomputation phase.

The reader is cautioned to distinguish between the terms precomputation table and precomputation matrix while reading this paper. A precomputation table consists of just the starting and ending point pairs of the precomputation chains, whereas a matrix consists of all points of the precomputation chains, including the intermediate points that are not written to the table. The precomputation matrix is mentally visualized as a collection of chains, with some of them possibly merging into each other in the nonperfect case, rather than as a structureless set of points.

One can regard a single nonperfect fuzzy rainbow precomputation matrix as a concatenation of s-many nonperfect DP submatrices. We fix notation referring to the i-th DP submatrix (1 ≤ i ≤ s) residing within a single nonperfect fuzzy rainbow matrix and to the number of distinct points contained therein. The ending points of one DP submatrix become the starting points of the following DP submatrix, and the only difference between a standard nonperfect DP matrix and any of these submatrices is that the latter may contain duplicate starting points that lead to completely identical chains, should one insist on treating them as separate chains. The expected numbers of distinct starting points and ending points of the i-th submatrix are also given their own symbols. In particular, separate symbols denote the number of starting points that are initially used in creating a perfect fuzzy rainbow matrix and the number of distinct terminal ending points of the fuzzy rainbow matrix.

The process of removing chain merges during the precomputation phase requires further clarification. The creation of a precomputation table for the perfect fuzzy rainbow tradeoff begins with a choice of starting points and the generation of the first nonperfect DP submatrix. After the generation of each nonperfect DP submatrix, the chains are sorted according to its ending points, and duplicate ending points are located to remove chain merges. Specifically, from each group of merging chains, one retains the chain with the longest i-th color DP chain segment and discards the other chains. We denote the resulting (temporary) perfect DP submatrix separately from its nonperfect counterpart. The set of ending points of the perfect DP submatrix is identical to the set of ending points of the nonperfect DP submatrix, and these are used as the starting points for the next nonperfect DP submatrix. For an appropriate choice of the initial number of starting points, which will be discussed later, the final perfect DP submatrix is expected to contain m nonmerging chains. The collection of all DP chains of each color that eventually reach one of the m chains remaining in the final perfect DP submatrix is also given its own symbol, and only the elements of these final collections can contribute to the success of inversions.
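The color-by-color merge removal can be sketched as follows, reusing the toy helpers of the previous sketch. The paper fixes the initial number of starting points so that m nonmerging chains are expected to survive; this simplified sketch simply returns whatever nonmerging chains remain.

```python
import random

def dp_subchain(color, x):
    """Run one DP subchain of the given color; return (ending DP, segment length),
    or (None, length) if the chain exceeds the length bound."""
    length = 0
    while True:
        x = F_color(color, x)
        length += 1
        if is_dp(x):
            return x, length
        if length > CHAIN_BOUND:
            return None, length

def build_perfect_table(num_starts, seed=0):
    random.seed(seed)
    # chains alive so far, recorded as (original starting point, current ending DP)
    chains = [(sp, sp) for sp in random.sample(range(N), num_starts)]
    for color in range(S):
        extended = []
        for start, point in chains:
            end, seg_len = dp_subchain(color, point)
            if end is not None:
                extended.append((start, end, seg_len))
        # merge removal: among chains sharing an ending DP, keep the one whose
        # current-color segment is the longest, as described in the text above
        best = {}
        for start, end, seg_len in extended:
            if end not in best or seg_len > best[end][1]:
                best[end] = (start, seg_len)
        chains = [(best[end][0], end) for end in best]
    # the precomputation table stores only the (starting point, ending point)
    # pairs, sorted by ending point so that online lookups are fast
    return sorted(chains, key=lambda pair: pair[1])
```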

The method for handling merges explained above does not make reference to the total lengths of the chains and relies only on the current-color DP chain segment lengths. We chose to work with such a merge removal rule because it allowed existing results concerning the perfect DP tradeoff to be used during our analysis of the perfect fuzzy rainbow tradeoff. However, since some readers may object that it is more reasonable to base the merge removal rule on the total chain lengths, let us present two remarks concerning this matter.

First, we argue that the choice of merge removal method is not very important for the fuzzy rainbow tradeoff. Note that the rule for selecting one chain from a set of merging chains was an important issue for the perfect DP tradeoff that required attention, because the chain lengths of a DP matrix follow a geometric distribution. However, the lengths of the fuzzy rainbow chains follow a distribution that very quickly approaches the normal distribution as s is increased. Intuitively, this is to be expected, since the concatenation of multiple DP chains will create an averaging effect. In fact, it is not difficult to work out the distribution explicitly and verify the claim directly. Hence, the variation in fuzzy rainbow chain lengths is small, and the impact of choices based on chain lengths on the performance of the fuzzy rainbow tradeoff can only be limited. Furthermore, the averaging effect implies that, except when only a small number of colors is involved, our merge removal rule that references just the current-color DP chain segment is likely to return the chain that is the longest in overall length.

Second, we question whether it is reasonable to retain the longer chains in the first place. The practice is widely accepted with the perfect DP tradeoff, because it is expected to bring about a higher success rate for the same amount of storage use. However, the approach also increases both the number of false alarms and the average cost of resolving each false alarm. Although we strongly believe that the positive effect of choosing longer chains on the success rate is likely to outweigh its negative effects on the online cost, currently, there is no publicly available theoretical argument or experimental evidence to support such a claim. A separate detailed study would be required to arrive at a definitive answer concerning this matter.

This completes our description of the precomputation phase for the perfect fuzzy rainbow tradeoff. To the reader with some experience in the tradeoff technique, the online phase algorithm should now be mostly obvious from the structure of the precomputation matrix. Given an inversion target, for each precomputation table and each starting color k, one generates a partial fuzzy rainbow chain that starts from the k-th color. If the terminal DP of this online chain can be found among the ending points of the precomputation table, the corresponding starting point is used to regenerate the precomputation chain, which could possibly return the correct input to the inversion target. However, most of the collisions will turn out to be false alarms, in which case the regeneration of the precomputation chain may be stopped at the DP for the color from which the online chain was started.

Some clarifications must be made concerning the order in which the online chains are created. In short, the multiple precomputation tables of the fuzzy rainbow tradeoff are processed in parallel, in a manner similar to the approach taken by the rainbow tradeoff. In practice, a round-robin style method can be used to simulate the parallel treatment of tables with even a single CPU, and this modification will not have a visible effect on the computational complexity, unless a very small number of tables is in use.

Let us make this more explicit. The online phase of the fuzzy rainbow tradeoff is performed in s discrete steps. On the 1st step, the online DP chains for the s-th colors, corresponding to the ℓ precomputation tables, are generated. All generated alarms are resolved before one moves on to the 2nd step. On the j-th step, fuzzy rainbow chains that start from the (s-j+1)-th colors, for the ℓ precomputation tables, are generated, and all resulting alarms are treated. The online phase is terminated when either the correct answer to the inversion target is found or all s steps have been completed. Even though it is likely for the answer to be obtained in the middle of the processing of some step, our analysis will assume that any step that has been initiated is fully completed, regardless of whether the answer has been secured. The effect of this simplification on the analysis results will be small, unless very small parameters are in use.
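A rough Python sketch of this step ordering is given below, reusing the toy helpers from the earlier sketches. For brevity, a single family of colored functions is shared across all tables, the mapping of the inversion target into the search space is simplified, and the preimage check is left as a caller-supplied callback; none of these simplifications reflect choices made in the paper.

```python
import bisect

def online_chain(start_color, y):
    """Generate the partial fuzzy rainbow chain for a target image y, beginning
    at start_color (0-based); returns its terminal DP, or None on a loop."""
    x = y % N            # simplified mapping of the target into the search space
    for color in range(start_color, S):
        x, _ = dp_subchain(color, x)
        if x is None:
            return None
    return x

def regenerate_and_check(start, start_color, y, is_preimage):
    """Regenerate a precomputation chain from its starting point and test every
    point of its start_color-th segment as a preimage candidate for y."""
    x = start
    for color in range(start_color):         # replay the earlier colors
        x, _ = dp_subchain(color, x)
    while True:                               # walk the start_color-th segment
        if is_preimage(x, y):
            return x                          # correct answer located
        x = F_color(start_color, x)
        if is_dp(x):
            return None                       # the alarm was false

def online_phase(tables, y, is_preimage):
    for step in range(1, S + 1):              # the j-th step processes color s-j+1
        start_color = S - step                # 0-based index of that color
        for table in tables:                  # tables are treated in parallel
            end = online_chain(start_color, y)
            if end is None:
                continue
            ends = [e for _, e in table]
            pos = bisect.bisect_left(ends, end)
            if pos < len(ends) and ends[pos] == end:       # an alarm
                answer = regenerate_and_check(table[pos][0], start_color, y,
                                              is_preimage)
                if answer is not None:
                    return answer             # success; exit the online phase
    return None                               # all s steps completed without success
```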

Our analysis of the perfect fuzzy rainbow tradeoff will frequently utilize two approximation techniques. The first is the approximation of expressions of the form (1 - 1/x)^y by e^(-y/x), whose applicability is explained in [12, Appendix A]. The second technique is the approximation of a sum over a large index set by a definite integral. Both of these approximations will be very accurate, whenever we use them, as long as the tradeoff algorithm parameters are chosen reasonably. Throughout this paper, we will ignore multiplicative factors that are negligibly close to 1 and write approximations of such order as equalities.
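As a small numeric illustration, and under the assumption that the two techniques take the forms typical of tradeoff analyses, the following Python snippet compares the exponential approximation against the exact product form and a Riemann sum against the corresponding integral; the specific values are arbitrary.

```python
import math

# exponential approximation: (1 - 1/x)**y versus exp(-y/x)
x, y = 2**20, 3 * 2**20
print((1 - 1/x) ** y, math.exp(-y / x))      # the two values nearly coincide

# sum-to-integral approximation: (1/n) * sum f(i/n) versus the integral of f over [0, 1]
n = 10**6
f = lambda u: 1 / (1 + u)
print(sum(f(i / n) for i in range(n)) / n, math.log(2))   # both are close to ln 2
```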

3. Analysis of the Perfect Table Fuzzy Rainbow Tradeoff

In this section, we analyze the online efficiency and the storage optimization issues for the perfect table fuzzy rainbow tradeoff. The expected computational complexity, rather than the worst case complexity, is computed, and the effects of false alarms are fully taken into account. We always assume that the parameters m, t, and s for the perfect fuzzy rainbow tradeoff are chosen so that the corresponding matrix stopping constant is neither too large nor very close to zero.

3.1. Number of Color Boundary Points

Let us consider a nonperfect fuzzy rainbow matrix created from a given number of starting points and its s nonperfect DP submatrices. For each color i, the collection of points that form the boundaries of the nonperfect DP submatrices will be referred to as the i-th color boundary points of the nonperfect fuzzy rainbow matrix.

Our previous work [9] stated the relevant relation and gave an iterative formula for computing the expected numbers of color boundary points, in terms of the matrix stopping constant for the nonperfect fuzzy rainbow matrix. The work also derived the closed-form approximation (4) under the assumption that the parameters are large and claimed this to be accurate even for small parameter values. The claimed accuracy is verified once more through experiments for parameters of our interest in the Appendix, and we will assume (4) is sufficiently accurate for the purpose of this work in the remainder of this paper.

Let us rewrite (4) in terms of the perfect fuzzy rainbow tradeoff parameters, so that the expression is more suitable for this work.

Lemma 1. To create a perfect fuzzy rainbow matrix containingnonmerging chains, one must expect to generatechains. Furthermore, the number ofth color boundary points in the nonperfect fuzzy rainbow matrix generated during this process is expected to be for.

Proof. Substitutinginto (4), we know that a nonperfect fuzzy rainbow matrix created withstarting points is expected to containnonmerging chains, where. The requirement ofmay be written as Solving this equation for, we can rewrite it as which is equivalent to the first statement of this lemma. Substituting the first claim and the above equation into (4), we find and this is the second claim of this lemma.

The first two displayed equations appearing in this proof both imply thatis always satisfied, which is similar to the situation with perfect rainbow tradeoffs. The reader may have guessed that takingvery close tocorresponds to making bad parameter choices. In fact, we will later observe in Section 4.4 thatis bounded sufficiently away fromfor any meaningful parameters and that the precomputation requirement grows unrealistically large asapproaches.

Because of its frequent appearances in the remainder of this section, we will introduce the notation When a smallis in use, the symbolshould be understood as designating the middle term, with the second equality interpreted as an approximation. However, we will mostly take the final term as the definition of, assumingto be sufficiently large, and use this notation even forand. Sinceis bounded away fromfor all practical parameter sets, we may assumeto be oforder. Our use of the lower case letter, rather than, is meant to serve as a reminder thatis not oforder.

One can directly verify from the definition that Combination of the middle expression with the knowledge ofimplies, for, and the right-hand side expression similarly implies the same claim, for.

3.2. Probability of Success

An expression for the probability of success of the perfect fuzzy rainbow tradeoff is obtained in this subsection. We first define and present a formula for the precomputation coefficient of the algorithm.

Proposition 2. The precomputation phase of the perfect table fuzzy rainbow tradeoff is expected to require   iterations of the one-way function, where the precomputation coefficient is

Proof. Since each DP chain is expected to be of length, on average, the computation of each temporary submatrixfrom itsdistinct starting points requiresiterations of the one-way function. The effort of sorting the ending points of, so that duplicates can be removed and the distinct starting points for the next submatrix are obtained, is of order, which is much smaller than the effort of generating the submatrix, and can be ignored. Taking account of the    tables, the cost of precomputation can be stated as Applying Lemma 1, we can write this as to obtain the claimed formula.

Let us use the notation wheredenotes the number of distinct points expected in theth submatrix of a perfect fuzzy rainbow matrix, and define the coverage rate of a perfect fuzzy rainbow matrix to be Note that the definitions allow us to expect bothandto be oforder.

The coverage rate (15) of a perfect fuzzy rainbow matrix may be computed fromandthrough the following formula.

Lemma 3. The coverage rate of the DP submatrixis given by

Proof. Recall thatis a subcollection of the chains appearing in. Note that the selection of chains fromto be retained independs on the behavior of the chains that extend out from the ending points ofand is independent of theth submatrix itself. In other words, the chains ofhave been selected at random from the chains of. Hence, the averages of chain lengths contained inandwill be the same, and we can make the crucial observation that In other words, the coverage rate ofis equal to the coverage rate of.
Now, recall that eachis simply a normal perfect DP matrix and also recall from [13] that the coverage rate of a perfect DP matrixmay be computed as whereis the matrix stopping constant for the perfect DP matrix ofending points. Thus we can write as claimed.

We are now ready to state the success probability of the perfect fuzzy rainbow tradeoff as a function of the algorithm parameters.

Proposition 4. Consider an input to the one-way function that is chosen uniformly at random from the input space. Given the image of this input under the one-way function as the inversion target, the online phase of the perfect table fuzzy rainbow tradeoff will succeed in recovering the original input with probability

Proof. The probability of success one can expect from the online processing of a single DP submatrixis. Since the submatrices were generated by different reduction functions, we may treat them as being independent. Thus, the probability of success for the complete online phase, taking all the tables into account, may be written as This can be approximated by as claimed.

Given any set of parameters, one can compute the success rate by combining the above formula with definition (15), Lemma 3, and notation (9). More precisely, the success rate may be seen as a function of,, and, rather than as a function of the more basic constant and parameters  , , , , and.

The proposition also shows that one can fix positive integerandto any value and still attain any success rate requirementby adhering to the relation Note that this implies that, unless the requirement for the probability of successis unrealistically close to, we will have  . In other words, the parameters    and    will be of the same order.

3.3. Online Complexity

The time memory tradeoff curve for the perfect fuzzy rainbow tradeoff is obtained in this subsection through a careful computation of the average case online execution complexities. This is the most complicated part of this paper.

We start by assessing how likely each step of the online phase is to be executed.

Lemma 5. The probability for an online chain that starts from theth color of a perfect fuzzy rainbow matrix to be generated, that is, the probability for theth DP submatrixto be searched for the correct answer, is whereis as given by Proposition 4.

Proof. The online chain that starts from theth color of a perfect fuzzy rainbow matrix will be generated if and only if the correct answer to the inversion target does not belong to the submatricescontained in the    perfect fuzzy rainbow matrices. Hence, the probability under consideration is The equality claimed in the lemma statement follows from an application of Proposition 4.

The cost of generating the online chains is a direct corollary to this lemma. It suffices to realize that an online chain that starts from the k-th color is expected to be of length (s-k+1)t and that there are ℓ tables to consider.

Proposition 6. The generation of the online chains for the perfect fuzzy rainbow tradeoff is expected to require iterations of the one-way function.

Our next goal is to obtain the cost of resolving alarms that appear during the online phase. This part calls for very delicate arguments involving random functions and is very technical. The practice-oriented reader can skip the following few lemmas and jump to Proposition 10.

We will consider an online chain that starts from the k-th color and treat separately the case of it merging into the fuzzy rainbow matrix within the k-th color and the case of it merging at a strictly later color. As a preliminary result, we require the probability for an online chain to merge into a fuzzy rainbow matrix.

Lemma 7. An online DP chain segment of theth color will not merge into the nonperfect DP submatrixwith probability. The probability for an online chain that starts from theth color not to merge into the perfect fuzzy rainbow precomputation matrix is.

Proof. An online DP chain segment of theth color will escape merging into the nonperfect DP submatrixwith probability where the approximation is justified by the factsand. Substituting (2) into this expression and then applying (9), the probability can be written as
The probabilities for an online chain to merge intoandare usually different, but the two are the same when. Now, an online chain that starts from theth color does not merge into the perfect precomputation matrix if and only if none of its DP chain segments merge into the submatrices, and this happens if and only if the online DP chain segments do not merge into the nonperfect submatrices. Here, we emphasize that a series of submatrices ending at the terminalth color is being considered. The claimed probability is the product of the termover.

The following lemma gives the cost of dealing with an alarm from a possible delayed merge, assuming the generation of an online chain that starts from theth color.

Lemma 8. Consider a single perfect fuzzy rainbow precomputation matrix and its associated s colors. Assume the generation of an online chain for this matrix that starts from the k-th color. The cost of resolving an alarm that may be induced by a possible merge of this online chain into the fuzzy rainbow matrix strictly after the k-th color is expected to be iterations of the one-way function.

Proof. We require the probability for an online chain that starts from theth color to merge into the perfect fuzzy rainbow precomputation matrix without merging into the submatrix. Such a merge could occur through the following two separate events. (a) Theth color online DP chain segment does not merge into the nonperfect submatrix, but the online chain part that extends out from the ending DP of theth color segment merges into the perfect precomputation matrix. (b) Theth color online chain merges into, without merging into, in which case the online chain is destined to merge into the perfect precomputation matrix at a later color.
If theth color online DP chain segment does not merge into, the chain that extends from its ending point may still be iterated with a random function. Hence, it is clear from Lemma 7 that the probability for the (a)-event to occur is
On the other hand, the extended part of the online chain is allowed no randomness in the (b)-event. We already know from the proof of Lemma 3 that an ending point of, that is, an ending point of, has probabilityof remaining among the ending points of. Once again, Lemma 7 allows us to claim as the probability for the (b)-event to occur. The probability for a merge to appear strictly after theth color is the sum of the above two probabilities.
A merge of the online chain into the precomputation matrix that appears strictly after theth color requires one to regenerate the associated precomputation chain up to the end of theth color. Note that (14) allows us to write the average chain lengthof each DP submatrixas. Hence, to resolve the alarm from the merge discussed above, one must expect to computeiterations of the one-way function.
The claimed cost of resolving alarms is a simple product of the merge probability and the work factor we have already stated.

The cost of dealing with an immediate merge of the online chain into the perfect precomputation matrix is given next.

Lemma 9. Consider a single perfect fuzzy rainbow precomputation matrix and its associated s colors. Assume the generation of an online chain for this matrix that starts from the k-th color. The cost of resolving an alarm that may be induced by a possible merge of this online chain into the fuzzy rainbow matrix within the k-th color segment is expected to be approximately iterations of the one-way function.

Proof. Arguments given in the proof of Lemma 8 already show that the probability for a merge to occur within theth color segment into the perfect DP submatrix is Since the online chain will merge into at most one precomputation chain in, it only remains to find out how much work is required to resolve such a merge.
An alarm will require the associated precomputation chain to be regenerated at least up to the start of theth DP submatrix, and we saw during the proof of Lemma 8 that this costsiterations of the one-way function. The number of additionalth color segment iterations that are required must be handled more carefully. One can expect this to be larger than, because longer precomputation chains are more likely to be involved in merges.
The work [12] stated in its Lemma 12 that the cost of resolving alarms expected from the processing of a single nonperfect DP table isiterations of the one-way function, whereis the matrix stopping constant of the nonperfect DP matrix. Following the arguments given in the proof of the same lemma, one can write the number of false alarms expected during the processing of a single DP table as Computing the ratio of these two numbers, we can conclude that, during the processing of a single nonperfect DP table, each alarm calls foriterations of the one-way function to resolve, on average.
Now, since (10) implies that, we know that only a small portion ofis discarded in creating, so that the DP matricesandmust be similar in their distributions of chain lengths. We also saw during the proof of Lemma 3 that the selection offromdoes not affect the distribution of chain lengths. Hence, it is reasonable to expect a merge of the online chain within theth color segment into eitherorto call for approximatelyiterations of theth colored one-way function, on average.
We wish to add a small tweak to this claim. Note that the average chain length of a nonperfect DP matrix and that of the corresponding perfect DP submatrix are slightly different. Since the average chain length may be understood as a concise representation of the distribution of chain lengths, one might expect the version of the above count that takes the average chain length of the perfect submatrix into account to be a better approximation of the additional work. In any case, since Lemma 3 relates the two average lengths, we know that the two candidate counts are very close to each other. For simplicity, we choose to take, as the number of extra k-th color iterations required, the version that makes our later formulas look slightly simpler.
The claimed cost of resolving alarms is a simple combination of the three factors,, andthat we have discussed so far.

Combining Lemmas 8 and 9, we can state that the cost of dealing with an alarm that may be induced by an online chain generated from theth color is where we have relied on (10) to replacewith. Since, unlike other results of this paper, we have stated Lemma 9 only as an approximation, let us briefly explain that this is still very accurate. Using relation (10) and the rough approximationsand, it is easy to verify the facts This shows that, unlessis very small, the first factor of (35) is at leasttimes larger than the second factor. On the other hand, the proof of Lemma 9 shows that the error given by formula (35) will be more than sufficiently bounded by its second factor. Hence, we can expect (35) to be accurate up to afactor, at the worst. In fact, our testing to be presented in the Appendix confirms that the accuracy of (35) is much higher than what we have claimed here.

The computational cost of dealing with the alarms is now a direct corollary to Lemmas 5, 8, and 9.

Proposition 10. The treatment of alarms during the online phase of a perfect fuzzy rainbow tradeoff is expected to require iterations of the one-way function.

The online computational complexity of the perfect fuzzy rainbow tradeoff can be stated as the sum of its two components stated by Propositions 6 and 10. Recalling from the discussion given under (23) that, it is easy to argue thatis oforder. The following time memory tradeoff curve is a direct consequence of our knowledge secured of the online time complexityand the observation that the storage complexity is.

Theorem 11. The time memory tradeoff curve for the perfect table fuzzy rainbow tradeoff is , where the tradeoff coefficient is

As a corollary to Lemma 5, we can state that the online phase of the perfect table fuzzy rainbow tradeoff is expected to call for lookups to the precomputation table. One can easily verify that this is oforder, which is much smaller than the online computational complexity.

3.4. Storage Optimization

The storage complexity appearing in the tradeoff curve of Theorem 11 refers to the total number of entries that need to be stored in the precomputed tables. However, practical interest would be in the size of the required physical recording media expressed in number of bits. Hence, we need to discuss the number of bits occupied by each starting and ending point pair in the tables.

The naive approach would allocate roughly 2 log2(N) bits to each table entry, a full starting point and a full ending point, but there are more efficient methods. Each starting point can be recorded in far fewer bits [14-16] through the use of consecutive starting points. In storing the ending points, one need not record bits that can be recovered from the definition of the distinguishing property [15, 17]. One can create an index table [15] for each sorted table and remove a large number of the most significant bits from each ending point. Finally, one could simply truncate the ending points [5, 15] to a certain length before recording them, at the expense of dealing with additional false alarms arising from partial matches.
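The following Python fragment sketches two of the listed ideas, namely consecutive starting points stored as sequence numbers and ending points stored with the DP-defined bits dropped and the remainder truncated. The bit counts are illustrative toy values carried over from the earlier sketches; the index table technique, which removes leading bits instead, is not shown.

```python
import math

LOG_N = int(math.log2(N))     # bits of a raw point (toy value from the earlier sketches)
LOG_T = int(math.log2(T))     # bits forced to zero by the distinguishing property

def compress_entry(start_index, end_point, trunc_bits):
    """start_index: sequence number of a consecutively chosen starting point,
    which needs far fewer bits than a random point; end_point: a DP whose low
    LOG_T bits are zero by definition and can be dropped before truncation."""
    end_reduced = end_point >> LOG_T                    # drop the DP-defined bits
    end_trunc = end_reduced & ((1 << trunc_bits) - 1)   # keep only trunc_bits bits
    return start_index, end_trunc

# Example: the 12345-th consecutive starting point paired with a truncated DP.
print(compress_entry(12345, 0xABC << LOG_T, trunc_bits=16))
```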

We will not explain the details of the methods mentioned above, since readable descriptions may be found in [12, 13]. Let us just provide the explicit relation between the degree of truncation and the amount of online computation increased by the associated false alarms.

Proposition 12. Assume the use of the ending point truncation method in which the probability for two truncated randomly chosen DPs to be identical is. Then, during the online phase of the perfect fuzzy rainbow tradeoff, one can expect to observe extra invocations of the one-way function induced by truncation-related alarms.

Proof. The probability for the ℓ online chains that start from the k-th color, one for each precomputation matrix, to be generated is given by Lemma 5. The probability for such a generated chain not to merge into the perfect fuzzy rainbow precomputation matrix is given by Lemma 7. The probability for a nonmerging online chain to cause a truncation-related alarm with any one of the truncated ending points is as stated, and there are m of these ending points, each of which could require separate treatment. Each alarm will require additional iterations of the one-way function to resolve. Taking the ℓ precomputation matrices into account, the claimed formula is a simple combination of the facts mentioned so far.

The cost stated above can easily be checked to be oforder. One can show, either by comparing this against or through heuristic arguments, that the additional cost of resolving alarms induced by the ending point truncation technique can be suppressed to a negligible level by having the truncation retain slightly more thanbits of information for each ending point. The arguments appearing in [12, 13] may then be repeated, almost word for word, to conclude that each entry of the perfect table fuzzy rainbow tradeoff can be recorded inbits, whereis a small positive integer.

The final conclusion made in the previous paragraph has not appeared before in the literature for the perfect fuzzy rainbow tradeoff. However, this is an expected result that has been obtained through straightforward adaptations of the arguments given in [12, 13].

4. Algorithm Comparison

This section may be slightly difficult to understand in full if the reader is unfamiliar with the approach that was recently introduced by [12] to compare different tradeoff algorithms and its further developments made by [13, 18]. However, we will refrain from providing lengthy repetitive explanation and justification of the comparison approach and ask the more interested reader to refer to the cited articles.

4.1. Overview of the Comparison Method

Recall that the conclusion of [13], in overly simplified terms, was that the perfect rainbow tradeoff algorithm is superior to all the other widely known tradeoff algorithms. Also recall from our recent work [9] that the nonperfect version of the fuzzy rainbow tradeoff, which is not yet widely known, is preferable to the perfect rainbow tradeoff when one is constrained in precomputation resources. Hence, we wish to compare the performance of the perfect fuzzy rainbow tradeoff, which we have analyzed in this paper, against those of the perfect rainbow and the nonperfect fuzzy rainbow tradeoffs.

In the remainder of this paper, we will use symbolsandto extend the notation we had been using concerning the perfect fuzzy rainbow tradeoff to the perfect rainbow and nonperfect fuzzy rainbow tradeoffs, and we will use the symbol    when we wish to reference a coefficient without making the tradeoff algorithm specific. For example, symbolsanddenote the tradeoff coefficients for the perfect rainbow and nonperfect fuzzy rainbow tradeoffs, respectively, andrefers to any tradeoff coefficient. The symbols,, andwill also be used as in,, andto clarify that the parameter or some complexity value is to be associated with a certain algorithm.

We will follow the approach of [12] in comparing the performances of different tradeoff algorithms against each other. In short, a small number of success rate requirementswill be chosen, and a graph for each algorithm, displaying the upper level tradeoff between its precomputation cost and online cost, will be plotted. The overall relative positions of the curves corresponding to different algorithms, subject to the same, will allow certain conclusions to be made concerning algorithm performances. The curves themselves will also be of value when choosing algorithm parameters for implementation.

It is clear that the precomputation cost of a tradeoff algorithm can be numerically represented in full by its precomputation coefficient. In the perfect fuzzy rainbow tradeoff case, this can be computed from Proposition 2, and the corresponding results for our two comparison target algorithms can be found in [9, 13]. Note that the precomputation coefficient presents the computational cost as the number of iterations required of the common target one-way function, in multiples of N, regardless of the tradeoff algorithm.

The online cost or efficiency of a tradeoff algorithm is mostly captured by its tradeoff coefficient. However, since theappearing in the definitionrepresents the number of table entries, rather than the physical number of bits required to store the tables, the tradeoff coefficient cannot be used directly in comparing different algorithms. One must first adjustto account for the differences in number of bits required per table entry by the algorithms being compared. For example, the comparison of the nonperfect DP and rainbow tradeoffs by [12] was carried out with the relatively adjusted tradeoff coefficientsand. The adjustment factorreflects the fact that the DP tradeoff requires roughly half as many bits to store each precomputation table entry in comparison to the rainbow tradeoff, under parameter choices that are typically considered during theoretical analyses of the tradeoff algorithms.

4.2. Tradeoff Coefficient Adjustment

We wish to be slightly more careful in treating the tradeoff coefficient adjustment factors than focusing on just the theoretically typical parameters. In general, as a trivial extension of the approach given by [12], one can use the adjusted tradeoff coefficient to represent the online cost of each algorithm, and plot theversuscurves to obtain a fair comparison of different algorithms. Here, the constantserves the purpose of bringing the adjustment factor toorder at typical parameters and is not an essential factor for the purpose of comparisons. The tradeoff coefficients,, andmay be computed from Theorem 11 and the corresponding results from [9, 13], but more work is required before we can specify the number of bits part more concretely for each algorithm.

Recalling the contents of Section 3.4 and Lemma 1, the adjusted tradeoff coefficient for the perfect fuzzy rainbow tradeoff may be written more concretely in terms of algorithm parameters as Referring to results from [9, 13], it is not difficult to work out similar adjustment factors for the perfect rainbow and nonperfect fuzzy rainbow tradeoffs to claim as the more concrete expressions.

The small positive integercorresponds to the number of ending point bits remaining after applications of the truncation and index file techniques. Working with Proposition 12 and corresponding results from [13, 18], one can reasonably argue thatfor the three algorithms can be set to a common value, which is why we have not subscripted them with the algorithm symbols. The terms involvingand, appearing in (43) and (44), have their roots in the merge removal process for producing perfect tables and reflect what fraction of the initially generated chains remains in the perfect table. Since the-curves are to be plotted usingandas parameters, the existence of these terms will not cause any later difficulties. However, the threevalues require further attention.

Note that one cannot hope to simply make independentvalue choices that achieve optimality for each algorithm, since optimality cannot be defined objectively. The most favorable balance between precomputation cost, storage cost, and expected online inversion timeis a subjective matter that would be different for every implementer and situation. For our algorithm comparison to be fair, the values,, andneed to be left as choices to be made by the implementer. Nevertheless, we can still restrict the three choices to be made in a reasonably correlated manner. The natural approach is to correlate the choices through the requirement that the,, andcomplexities for the three algorithms be made roughly comparable and with the strict restriction that the success rates of the three algorithms be identical. Since the performances of the three algorithms commonly satisfy, if an implementer under a specific situation is asked to produce parameters sets for the three algorithms that he deems favorable, his choices for the three algorithms would be roughly matching the performance figures,, and.

Let us now choose to view each positive integeras presenting a separate version of the perfect fuzzy rainbow algorithm. That is, we treat the perfect fuzzy rainbow tradeoff as a series of infinitely many different algorithms. A similar view will be taken of the nonperfect fuzzy rainbow algorithm. Below, we will describe a rule for correlating the parameter sets among these two infinite series of algorithms and the perfect rainbow tradeoff algorithm, for each fixed common probability of success.

We first take integersandsuch that  and set the perfect fuzzy rainbow parameters to for each, where the coverage rate, corresponding to each, is to be computed through (9), Lemma 3, and (15). The perfect rainbow tradeoff parameters corresponding to the above are set to where, and in such a way that is an integer. Finally, the nonperfect fuzzy rainbow parameters are set to for each, whereandis the coverage rate for the nonperfect fuzzy rainbow tradeoff.

Note that the requirement does not affect the implementer’s control over the--tradeoff in any way. Further note that one has sufficient control over,, and, even whenandare fixed to approximate values. That is, one can vary the-curve parameters,, andquite freely while keeping all thevalues somewhat stable.

Using (23) and similar claims from [9, 13], it is trivial to verify that all parameter sets achieve the same success rate. It is also easy to verify, using the facts,, andand the corresponding facts from [9, 13], that the performance figures,, andare of similar order for all the above parameter sets. In fact, it is possible to argue that the above method is the only possible manner in which comparable,, andcomplexities can be achieved by the different algorithms at a common success rate.

The reasonable association between the parameter sets for different algorithms given by (46), (47), and (48) allows the adjusted tradeoff coefficients to be written as where we have relied onin writing (50). We have also additionally subscripted the (adjusted) tradeoff coefficients with the parameterto make their dependence onmore explicit.

To compare the performances of different tradeoff algorithms, it now suffices to fix, choose reasonable values forand, and plot the-curves, for all the algorithms.

4.3. Choosing the Color Count

Before comparing the perfect fuzzy rainbow tradeoff with the other two algorithms, we wish to make comparisons among the many versions of the perfect fuzzy rainbow tradeoff corresponding to different choices of the color count s. A small number of the corresponding curves are given in Figure 1.

The left-hand side box contains plots of thepoints, for the case when the success rate is set toandis set to. Each of the three curves is for specificvalues. The right-hand side box contains similar curves for theandcase. Note thatwould be a reasonable value in view of the discussions given by Section 3.4 and thatandare choices that would typically be made during theoretical discussions of the tradeoff technique for the very small  and the very large  search space sizes. Hence, ourandexamples are representative of the behaviors at both ends of the realistic tradeoff application environments.

Each curve has been plotted precisely to its lowest point. The curve points that would appear to the right of a lowest point are meaningless, since they correspond to parameter sets that call for higher precomputation efforts while achieving lower online efficiencies than the parameter sets corresponding to the lowest point.

One may roughly associate overall better performance with a curve that is closer to the lower left corner of each figure box. However, it can be seen that curves corresponding to different s values may cross over each other. In fact, we could verify that the lowest curve in each box crosses over the middle curve appearing in its box at a high point along the curves. Hence, one cannot claim one s value to be providing definitely superior performance over another s value, at least not strictly logically.

Nevertheless, if we restrict our attention to the lower parts of the curves, which are of higher practical interest than the high precomputation parts, it becomes clear that little harm will be done by simplifying matters and ranking the performances for different s choices based only on the lowest point of each curve. In other words, for practical purposes, it is sufficiently reasonable to declare an s value to be optimal for a specific pair of success rate and bits-per-entry values if it attains the lowest of the curve minima. We emphasize that such a simplification is possible only because of the nice relative positions of the many curves and that the same simplification may not be applicable to other comparison situations.

The optimal number of colors, as defined through comparisons at the lowest points of the curves, is given by Table 1, for various success rate requirements and a wide range of bits-per-entry values. The parameter choice that attains the minimum possible value of the adjusted tradeoff coefficient is listed under each optimal color count.

4.4. Some Observations

It can be seen from Table 1 that the optimal s value becomes larger as the success rate requirement is increased and also as the number of bits per table entry becomes larger. The same trend could be observed from an analogous table presented by [18] for the nonperfect fuzzy rainbow tradeoff.

An intuitive explanation for the trend concerning the success rate can be given as follows. Since the online phase of the fuzzy rainbow tradeoff processes the precomputation tables in parallel, a higher s value, corresponding to a finer segmentation of the precomputation matrix, allows for a more immediate exit from the online phase upon an encounter with the correct answer. Early exits from the online phase are less common under low success rates, and the importance of a higher s value increases as the success rate requirement is increased. As for the trend related to the change in the number of bits per entry, one could treat this as a natural scaling effect that accompanies the general increase in search space size associated with a larger number of bits per entry.

Let us now return to the definition of the adjusted tradeoff coefficient. A careful reading of [13] shows that they had ignored the term involvingfrom (50) in comparing the perfect rainbow tradeoff with the other major tradeoff algorithms. This was justified in their work based on the observation that the term remains upper bounded by a small number, when parameters are restricted to those that do not call for impractically large precomputation efforts.

Applying the same argument to the perfect fuzzy rainbow tradeoff, we can see that the corresponding term of (49) is likewise upper bounded for parameters that are reasonable in view of precomputation cost. Hence, we could consider the possibility of using the adjusted tradeoff coefficient defined with this term dropped, as in (53). This definition could be preferable to the previous definition (49), in view of simplicity.

The effect of removing theterm can be seen from Figure 2. The curves forandare noticeably different from each other, and the use of the simpler (53) in place of the more accurate (49) cannot be justified.

An overview of Table 1 reveals that any reasonable choice of parameters will mostly satisfy the bound, and this bound implies that the termis always somewhat small. However, referring once more to Table 1, we see thatvalues of interest are not very large. Furthermore, practical values ofare not very large either. Hence, unlike the situation with the perfect rainbow tradeoff, the bound onis not small enough, in comparison to the terms it is added together with, to be completely ignored.

Our final comment is intimately connected to the above discussion. Combining the boundwith Proposition 2 and (23), we can state the bound concerning the precomputation coefficient. Hence, unless the success rate requirement is set unrealistically close to, precomputation cost will be automatically bounded by   for all meaningful parameters.

4.5. Algorithm Comparison

We are finally ready to compare the performance of the perfect fuzzy rainbow tradeoff with those of the perfect rainbow tradeoff and the nonperfect fuzzy rainbow tradeoff.

The comparison at the first success rate considered is given by Figure 3, which presents the corresponding curves for the three tradeoff algorithms. The left-hand side box was drawn with parameters that would be used with a very small search space, and the right-hand side box represents the situation of a very large search space. The s values for the perfect fuzzy rainbow tradeoffs were chosen to be the optimal ones given by Table 1. The corresponding table from [18] was used to decide on the s values for the nonperfect fuzzy rainbow tradeoff.

The comparisons at the two other success rates considered are given by Figures 4 and 5. As before, parameters typically considered during theoretical analyses of the tradeoff algorithms, corresponding to a small search space and a large search space, were used.

In every comparison box, the curve for the perfect fuzzy rainbow tradeoff appears much closer to the lower left corner than the points for the perfect rainbow tradeoff. The perfect fuzzy rainbow tradeoff can attain better online efficiency than the perfect rainbow tradeoff for the same precomputation cost and can attain equal online efficiency at a lower precomputation cost. Furthermore, the perfect fuzzy rainbow tradeoff can attain online efficiency that is not possible with the perfect rainbow tradeoff.

Similar statements are true concerning the comparison between the perfect and nonperfect fuzzy rainbow tradeoffs, except that, at the lower end of the possible precomputation cost range, the two curves cross over each other, and the nonperfect fuzzy rainbow tradeoff can provide better online efficiency for the same precomputation effort. However, this is the region where the online efficiency is extremely bad, so that these parameters would be of limited practical interest.

In all, we can state that the perfect fuzzy rainbow tradeoff displays better performance than both the perfect rainbow tradeoff and the nonperfect fuzzy rainbow tradeoff, over a wide range of tradeoff algorithm application situations.

5. Conclusion

The execution behavior of the perfect table version of the fuzzy rainbow tradeoff algorithm was analyzed in this paper. The average case online computational complexity that fully accounts for the effects of false alarms was accurately obtained. The expected number of precomputation table entries and the number of bits required to record each table entry were also obtained accurately.

The results of our complexity analysis were used to compare the performance of the perfect fuzzy rainbow tradeoff with those of other tradeoff algorithms. The perfect rainbow tradeoff was recently argued by [13] to be advantageous over all other widely known tradeoff algorithms, and our recent work [9] had shown that the less widely known nonperfect fuzzy rainbow tradeoff outperforms the perfect rainbow tradeoff under certain circumstances. Hence, our comparison targets were set to the perfect rainbow tradeoff and the nonperfect fuzzy rainbow tradeoff.

The comparison took the following aspects of the tradeoff algorithms into account: success rate of inversion, precomputation complexity, computational complexity of the online phase, and physical storage size of the precomputation table. The comparison called for a carefully designed rule that correlated the number of bits per table entry to be used by each algorithm in a fair manner. The current work is the first to include the effects arising from the merge removal process of the perfect table creation into this rule.

We were able to conclude that the perfect fuzzy rainbow tradeoff is highly preferable to the perfect rainbow tradeoff. The perfect fuzzy rainbow tradeoff was also found to be superior to the nonperfect fuzzy rainbow tradeoff, except possibly at parameters that would have both of the algorithms performing very poorly during the online phase in exchange for a very small advantage in precomputation cost.

Appendix

Experimental Results

This section presents the results of four separate tests that support the theoretical findings of this paper. The first test shows the level of accuracy of the approximation claimed by Lemma 1. The subsequent two tests verify the correctness of two of our logical arguments that lie hidden within the proofs of technical lemmas. The final test verifies that our claim of online computational complexity is correct and serves as an overall checkup of our theory.

The one-way function for the tests was taken to be the key to ciphertext mapping, under a randomly generated fixed plaintext, of AES-128. Independence between multiple tests was acquired through distinct randomly generated plaintexts. Truncations of 128-bit ciphertexts to binary strings of a certain fixed length and zero-padded extensions of these to 128-bit keys were used to bring the search space to a manageable size. The parameter $t$ was always taken to be an integer power of 2, and the distinguishing property was set to check whether the $\log_2 t$ least significant bits were zero. The reduction functions were set to constant XOR-ing operations.
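For concreteness, a minimal sketch of one way such a reduced one-way function, distinguishing property, and reduction functions could be set up is given below. This is an illustration only and not the code used in our tests; it assumes the pycryptodome package for AES, and the search space size and DP length appearing in it are arbitrary placeholder values rather than the parameters of our experiments.

import os
from Crypto.Cipher import AES  # pycryptodome is assumed to be available

SPACE_BITS = 30             # placeholder search space size N = 2**30
DP_BITS = 10                # placeholder: t = 2**10, so a DP has its 10 low bits zero
PLAINTEXT = os.urandom(16)  # randomly generated fixed plaintext

def one_way(x):
    # Zero-pad the input to a 128-bit AES key, encrypt the fixed plaintext,
    # and truncate the 128-bit ciphertext back down to SPACE_BITS bits.
    key = x.to_bytes(16, "big")
    ct = AES.new(key, AES.MODE_ECB).encrypt(PLAINTEXT)
    return int.from_bytes(ct, "big") & ((1 << SPACE_BITS) - 1)

def is_dp(x):
    # Distinguishing property: the log2(t) least significant bits are zero.
    return x & ((1 << DP_BITS) - 1) == 0

def reduce_color(x, color):
    # Reduction function for each color: XOR with a fixed color-dependent constant.
    return x ^ color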

Let us first check the accuracy of Lemma 1 through experiments. Recall that Lemma 1 is equivalent to (4) and that (4) was taken from our previous work [9], which dealt with the nonperfect fuzzy rainbow tradeoff. The accuracy of (4) was already confirmed through tests in [18], even for small $s$ values. However, according to (4), which is at least approximately correct, the parameter sets used there correspond to values lying outside the rough range that the results of Section 4.3 identify as being of current interest. Hence, we cannot rely on the previous test results to claim that the accuracy of Lemma 1 is sufficient for use with the perfect table case analysis.

One can infer from a careful review of how [9] obtained the approximate closed-form formula (4) from the iterative formula (3) that the inaccuracy of (4) is likely to increase as $s$ is made smaller and also as the matrix stopping constant is made to approach its upper limit. We experimented with multiple parameter sets for which the parameter pair was close to one of those appearing in Table 1, that is, those that correspond to optimal online efficiencies for some success rate requirement. We discovered that the inaccuracy was greater with optimal parameter sets for the lower success rates and that the level of (in)accuracy was rather stable among parameter sets for the same success rate. We could also confirm that the accuracy increased when the matrix stopping constant was made smaller under a fixed $s$ value.

Two of the test results are given by Figure 6. After choosing the search space size and the parameters $m_0$, $t$, and $s$, we computed the boundary point counts predicted by Lemma 1 and generated the fuzzy rainbow matrix from the $m_0$ starting points. A total of ten fuzzy rainbow matrices were generated for each parameter set, and the number of $i$th color boundary points was recorded and averaged separately for each color. A small number of chains did not reach a DP within our chain length bound, at various colors, and we discarded these without replacing them with newly generated chains. The lines of Figure 6 represent the theory given by Lemma 1, and the dots correspond to the experimental data. Each dot gives the count of the $i$th color boundary points, averaged over the ten tests.
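The chain generation procedure of this test may be illustrated with the following sketch, which reuses the one_way, is_dp, and reduce_color helpers and the SPACE_BITS constant of the previous sketch. The function name generate_matrix and the structure of the return value are our own illustrative choices; chains that have merged contribute a single point to each boundary they share, so the returned counts are counts of distinct boundary points.

import random

def generate_matrix(m0, s, chain_bound):
    # Generate one fuzzy rainbow matrix from m0 random starting points and
    # record the distinct boundary points reached at each color.
    boundary_points = [set() for _ in range(s + 1)]   # indices 1..s are used
    chains = []
    for _ in range(m0):
        start = random.randrange(1 << SPACE_BITS)
        x = start
        completed = True
        for color in range(1, s + 1):
            length = 0
            while True:                      # walk one DP subchain of this color
                x = reduce_color(one_way(x), color)
                length += 1
                if is_dp(x):
                    boundary_points[color].add(x)
                    break
                if length >= chain_bound:    # no DP within the bound: discard the chain
                    completed = False
                    break
            if not completed:
                break
        if completed:
            chains.append((start, x))        # starting point and final ending point
    counts = [len(points) for points in boundary_points[1:]]
    return chains, counts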

The test results for the worst parameter set we had experimented with are given in the two left-hand side boxes of Figure 6. This is the situation where our theory is least accurate, but even in this case, the bottom box shows that the largest inaccuracy of (4) is approximately 5%. The right-hand side boxes present test results for a parameter set that does not necessarily correspond to optimal online efficiency for any success rate. Since precomputation cost considerations will make parameter sets of suboptimal online efficiency more practical, the right-hand side boxes present the situation one would be experiencing in practice. The test results match our theory reasonably well, although not perfectly. We may conclude that Lemma 1 predicts the number of $i$th color boundary points sufficiently accurately for use in practice.

We wish to remark that our test results are in almost perfect agreement with what can be computed through the iterative formula (3), which (4) is supposed to approximate. Since the $s$ values of interest are of manageable size, one could always revert to using the iterative formula (3) should there be a need for higher accuracy.

Let us now explain our second experiment. One argument that was crucial during our analysis was that the selection process of the DP subchains retained after the merge removal, from among all DP subchains of each DP submatrix, is not correlated with the lengths of these DP subchains. This claim is equivalent to (A.1), an equation that appeared during the proof of Lemma 3. The same argument also allowed us to claim, during the proof of Lemma 8, the number of iterations of the one-way function that are necessary to regenerate a precomputation chain up to the $i$th color. This is a new argument that had not been used in any of the previous works, and it is reasonable to verify this claim experimentally.
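To illustrate how such a non-correlation claim can be checked empirically, the following sketch compares, color by color, the average DP subchain length over the retained chains with the average over all chains; the claim predicts that the two averages agree up to sampling noise. The comparison of averages is our own illustrative statistic and is not the exact quantity appearing in (A.1); the per-chain length records and the retained flags are assumed to be produced by whatever merge removal procedure is in use.

def check_length_independence(lengths, retained):
    # lengths[j][i-1] : length of the i-th color DP subchain of chain j
    # retained[j]     : True if chain j was kept by the merge removal process
    # If the selection is uncorrelated with the subchain lengths, the average
    # over the retained chains should agree with the average over all chains,
    # up to sampling noise, in every color.
    s = len(lengths[0])
    for i in range(s):
        all_lens = [row[i] for row in lengths]
        kept_lens = [row[i] for row, keep in zip(lengths, retained) if keep]
        avg_all = sum(all_lens) / len(all_lens)
        avg_kept = sum(kept_lens) / len(kept_lens)
        print("color %d: average over all = %.2f, over retained = %.2f, ratio = %.3f"
              % (i + 1, avg_all, avg_kept, avg_kept / avg_all))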

Note that Lemma 3 allows us to anticipate the order of these values, implying that there are more discarded chains within each DP submatrix for smaller values of the parameters concerned. Hence, in testing (A.1), to increase the chances of discovering possible errors, one would not only choose parameters for which the amount of merging is large but also choose to use small parameter values. Since the formula of Lemma 3, given in terms of notation (9), is guaranteed to be accurate only for the parameter values of interest, which are not very small, it would be more appropriate to test our argument directly through (A.1) rather than through the formula of Lemma 3.

Test results for two parameter sets using such small parameter values are given in Table 2. The remaining parameters were chosen so that the experimentally obtained chain counts would be large, corresponding to a large ratio of discarded chains, while still falling within our range of interest. Tests for each of the two parameter sets were repeated ten times. All figures displayed by Table 2, other than the parameters themselves, are averages taken over the ten repetitions. In particular, the stated ratio values are averages of the per-repetition ratios and not the simple ratios of the two averages appearing above them. The experimental data strongly supports the correctness of (A.1).

Our third experiment aimed to test how accurately (35) gives the cost of resolving a possible alarm associated with an online chain that is generated from the $i$th color. This was of particular interest because the formula depended on Lemma 9, which was stated only as an approximation. The experimental verification of (35) would also increase our confidence in the extremely delicate random function arguments we gave during the proof of Lemma 8.

The results of our tests, carried out with two separate sets of parameters, are given in Table 3. Each set of data involves 50 precomputation tables, each created from the specified number of starting points. For each precomputation table and each fixed starting color, we generated sufficiently many online chains to observe approximately 2000 merges. Our test results are in good agreement with the theoretical predictions given by (35).

We have added another row of theoretical predictions in Table 3. Tracing back through the proofs of Lemmas 7, 8, and 9 and referring to Lemmas 1 and 3, one can see that (35) is a simplified version of the more exact expression (A.2). Since Lemma 1 is a closed-form formula that approximates the iterative formula (3), our theoretical predictions of the alarm cost can be made more accurate through (A.2) and (3), although the required calculations can be slightly cumbersome due to the iterative nature of (3). Indeed, we can see in Table 3 that the theoretical predictions made through (A.2) and (3) are even closer to the test results than the predictions of the closed-form formula (35).

Finally, we present an experimental verification of Theorem 11, which is essentially equivalent to the claim that (38) gives the online computational complexity of the perfect fuzzy rainbow tradeoff. Since this test was meant to be an overall sanity check of our theory, we used realistic parameter values.

For each choice of parameters, we computed the chain counts expected from Lemma 1 and generated a full set of precomputation tables, each from $m_0$ starting points. After the completion of each precomputation phase, we generated many random inversion targets and performed the online phase for each of them. Test results corresponding to three separate parameter sets are displayed in Table 4. During both the precomputation and the online phases, we discarded any chain that reached our chain length bound without a DP within any of its subchains. The computational effort associated with these discarded chains is included in the test online complexities.
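A simplified sketch of the online procedure, as used in this test, is given below; it builds on the helpers of the earlier sketches. It is only an illustration of the general structure: ending point truncation, index files, and the handling of multiple tables are omitted, and the helper names walk_online_chain, resolve_alarm, and online_phase are our own.

def walk_online_chain(x, start_color, s, chain_bound):
    # Walk an online chain through colors start_color..s and return its final
    # ending point, or None if some subchain exceeds the length bound.
    for color in range(start_color, s + 1):
        steps = 0
        while not is_dp(x):                   # finish the current color's subchain
            x = reduce_color(one_way(x), color)
            steps += 1
            if steps > chain_bound:
                return None
        if color < s:                         # step into the next color's subchain
            x = reduce_color(one_way(x), color + 1)
    return x

def resolve_alarm(start, k, target, chain_bound):
    # Regenerate a precomputation chain up to color k and look for a preimage
    # of the target within its color-k segment.
    x = start
    for color in range(1, k):                 # colors 1..k-1: walk each subchain to its DP
        for _ in range(chain_bound):
            x = reduce_color(one_way(x), color)
            if is_dp(x):
                break
    for _ in range(chain_bound):              # color k: test each candidate point
        if one_way(x) == target:
            return x                          # true preimage found
        x = reduce_color(one_way(x), k)
        if is_dp(x):
            break                             # color-k boundary reached: false alarm
    return None

def online_phase(target, table, s, chain_bound):
    # table maps (untruncated) final ending points to starting points.
    # Online chains with later starting colors are shorter, so they are tried first.
    for k in range(s, 0, -1):
        end = walk_online_chain(reduce_color(target, k), k, s, chain_bound)
        if end is not None and end in table:  # possible alarm
            answer = resolve_alarm(table[end], k, target, chain_bound)
            if answer is not None:
                return answer
    return None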

Let us take a closer look at the first parameter set. We truncated each ending point to a length of 26 bits, which allows for some extra bits of information beyond the minimum required. A small fraction of the ending points were thereby made identical to each other, and we were careful to process all matching truncated ending points during the online phase. An index file would be reasonable in this situation, and its application to our truncated 26-bit ending points would allow each ending point to be stored in even fewer bits. However, we did not do so in favor of easier implementation and simply allocated 3 bytes and 4 bytes to the starting and ending points, respectively. We can check through Table 1 that the parameter set used for Test-1 is close to optimal, in view of the online phase, for its targeted success rate.
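The handling of truncated ending points in this test may be illustrated as follows. The sketch assumes the table is given as a list of (starting point, ending point) pairs; it keeps the 26-bit truncation in plain Python integers rather than in the packed 3-byte and 4-byte fields of the actual implementation, and which bits are retained by the truncation is an arbitrary choice made for the illustration.

from bisect import bisect_left

END_TRUNC_BITS = 26   # ending points truncated to 26 bits, as in Test-1

def build_table(chains):
    # Sort (start, end) pairs by truncated ending point so that all chains
    # sharing a truncated ending point sit next to each other.
    mask = (1 << END_TRUNC_BITS) - 1
    return sorted((end & mask, start) for start, end in chains)

def matching_starts(table, online_end):
    # Return the starting point of every chain whose truncated ending point
    # matches the truncation of the online chain's ending point.  All of the
    # matches must be processed, since truncation can make distinct ending
    # points collide.
    key = online_end & ((1 << END_TRUNC_BITS) - 1)
    i = bisect_left(table, (key, -1))
    starts = []
    while i < len(table) and table[i][0] == key:
        starts.append(table[i][1])
        i += 1
    return starts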

Our Test-2 implementation recorded a truncated portion of each ending point, and this would correspond to still fewer stored bits per entry if an index file were used. Intending to reach a certain success rate, we used a parameter value close to the optimal value for this situation. However, the remaining parameters were chosen so that the precomputation cost is smaller than what would be optimal for the online phase. This parameter set should be more practical in view of the lower precomputation cost.

Parameters for Test-3 were likewise chosen to be realistic rather than optimal for the online phase. One difference from the parameter set of Test-2 is that we truncated the ending points to a slightly longer length. We could verify that our prediction of the cost of resolving alarms was more accurate for this case than for Test-1 and Test-2.

The theory and experimental figures for the three tests are in good agreement. In reality, as can be expected from Figure 6, the average boundary point counts observed in the tests are slightly larger than our predictions, and this brings about a success rate that is higher than expected. The higher success rate lowers the cost of generating online chains, while application of the ending point truncation technique raises the cost of resolving false alarms. The combination of these two opposite effects is what we are seeing in Table 4. We have verified through separate computations that replacing Lemma 1 with its iterative counterpart (3) produces even better predictions of at least the costs of generating the online chains. Hence, the small discrepancies between theory and test found in Table 4 are due to the accuracy limitations of Lemma 1 rather than to any oversights in our theoretical arguments.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (NRF-2012R1A1B4003379).