Security and Communication Networks

Research Article | Open Access

Volume 2020 | Article ID 4898612 | 14 pages | https://doi.org/10.1155/2020/4898612

Automatic Search for the Linear (Hull) Characteristics of ARX Ciphers: Applied to SPECK, SPARX, Chaskey, and CHAM-64

Academic Editor: David Megias
Received: 09 May 2019
Revised: 30 Aug 2019
Accepted: 13 Nov 2019
Published: 21 Jan 2020

Abstract

Linear cryptanalysis is an important method for evaluating cryptographic primitives against key recovery attacks. In this paper, we revisit the Walsh transformation for the linear correlation calculation of modular addition and propose an efficient algorithm to construct the input-output mask space of a specified correlation weight. By filtering out the mask tuples with excessively large correlation weights in the first round, the search space of the first round can be substantially reduced. We introduce the concept of a combinational linear approximation table (cLAT) for modular addition with two inputs. When one input mask is fixed, the other input mask and the output mask can be obtained by a Splitting-Lookup-Recombination approach: we first split the n-bit fixed input mask into several subvectors, then find the corresponding bits of the other masks, and apply pruning conditions in the recombination phase. By this approach, a large number of search branches in the middle rounds can be pruned. Combining these optimization strategies with the branch-and-bound search algorithm, we improve the search efficiency for linear characteristics of ARX ciphers. Linear hulls for SPECK32/48/64 with a higher average linear potential (ALP) than existing results have been obtained. For the SPARX variants, an 11-round linear trail and a 10-round linear hull have been found for SPARX-64, and a 10-round linear trail and a 9-round linear hull are obtained for SPARX-128. For Chaskey, a 5-round linear trail has been obtained. For CHAM-64, 34/35-round optimal linear characteristics are found.

1. Introduction

Modular addition, rotation, and XOR constitute the basic operations of ARX cryptographic primitives [1]. In ARX ciphers, modular additions provide the nonlinearity, with efficient software implementations and low dependence on computing resources. Compared with S-box-based ciphers, ARX ciphers do not need to store S-boxes in advance, which reduces the occupation of storage resources, especially in resource-constrained devices. In addition, ARX ciphers do not need to query S-boxes during encryption and decryption, which avoids a large number of lookup operations. Therefore, the ARX construction is preferred by many designers of lightweight ciphers. At present, many primitives use this construction, such as HIGHT [2], SPECK [3], LEA [4], Chaskey [5], SPARX [6], and CHAM [7].
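As a minimal illustration (word size and operand values are chosen arbitrarily here, not taken from any particular cipher), the three operations can be written on n-bit words as follows:

```python
# The three ARX operations on n-bit words (here n = 16); a toy illustration only.
N = 16
MASK = (1 << N) - 1

def add(x, y):   # addition modulo 2^n: the only nonlinear operation
    return (x + y) & MASK

def rol(x, r):   # rotation to the left by r bits
    return ((x << r) | (x >> (N - r))) & MASK

def xor(x, y):   # bitwise XOR
    return x ^ y

print(hex(add(0x1234, 0xFFFF)), hex(rol(0x8001, 3)), hex(xor(0xAAAA, 0x5555)))
```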

Until now, cryptanalysis of ARX ciphers has not been as well understood as that of S-box-based ciphers, and their security analysis is relatively lagging behind [8]. Linear cryptanalysis is very important for evaluating the security margin of symmetric cryptographic primitives [9, 10]. The linear approximation tables of S-box-based ciphers can mostly be constructed and stored directly; however, the full linear approximation table of modular addition becomes too large to store when the word length of the modular addition is large.

For linear cryptanalysis of ARX ciphers, one crucial step is to calculate the linear correlation of modular addition. In [11–14], the linear properties of modular addition have been carefully studied. In [13], a method to calculate the linear correlation of modular addition recursively was proposed, but the calculation process, which is based on bit-level state transitions, leads to high complexity. Based on this method, only the optimal linear characteristics for the variants of SPECK32 [15] and SPECK32/48 [16] were found.

In 2013, Schulte-Geers used CCZ equivalence to improve the explicit formula for the calculation of the linear correlation of modular addition [17]. Based on the improved formula and a SAT solver model, Liu et al. obtained better linear characteristics for SPECK [18]: the optimal linear trails for SPECK32/48/64, with correlations close to the security bound, were obtained, as well as 9/10-round linear hulls for SPECK32.

According to the position of the starting round of the search algorithm, there are currently three types of automatic search techniques for linear/differential cryptanalysis of ARX primitives: bottom-up techniques [15], top-down techniques [19–21], and the method of extending from the middle towards both ends [22]. In these methods, the linear correlations are either calculated directly from the input-output masks or obtained by looking up a precomputed partial linear approximation table (pLAT) [23]. For addition modulo 2^n with two inputs, the correlations need to be calculated based on the known input-output masks. However, in the search for linear characteristics of ARX ciphers, due to the existence of three-forked branches, in most cases only one input mask of a modular addition is determined, while the other input mask and the output mask u are unknown. Although the whole space of the undetermined masks can be traversed in a trivial way, this is very time-consuming.

Highly efficient query operations can be achieved by constructing a linear approximation table of reasonable storage size. The pLAT stores the input-output masks whose linear correlation is greater than a certain threshold [15]. When branches cannot be found in the pLAT and their correlations have to be calculated from the input-output masks, the calculation process leads to a significant reduction in search efficiency. Although heuristic methods can speed up the search, they cannot guarantee that the results are optimal [24].

Therefore, constructing a search model based on the precise correlation calculation formula and realizing an efficient search for linear characteristics of ARX ciphers remain problems worth studying. The motivation of this paper is to investigate how to speed up the search algorithm in order to find linear (hull) characteristics of typical ARX ciphers.

1.1. Our Contributions

In this paper, we first revisit the linear correlation calculation of modular addition and introduce an algorithm to construct the input-output masks of a specific correlation weight. Then, we propose the novel concept of a combinational linear approximation table (cLAT) and introduce an algorithm to generate the lookup tables. Combining these two optimization algorithms, we propose an automatic algorithm to search for the optimal linear characteristics of ARX ciphers. In the first round, we can exclude the search space of nonoptimal linear trails by increasing the correlation weight of each modular addition monotonically. In the middle rounds, the undetermined masks and the corresponding correlation weight of each modular addition can be obtained by querying the cLAT, and a large number of nonoptimal branches can be filtered out during the recombination phase. The algorithm can also be appropriately modified for heuristic search.

As applications, for SPECK32/48/64, 9/11/14-round linear hulls are obtained. For SPARX-64, an 11-round linear trail and a 10-round linear hull are found. For SPARX-128, we experimentally obtain the optimal linear trails for the first eight rounds, and we obtain a 10-round linear trail. For Chaskey, the linear characteristics are extended to cover more rounds, and a 5-round linear trail is found. For CHAM-64, we find a new 34-round optimal linear trail. A summary is shown in Table 1.


Table 1: Summary of the results.

Variants  | Rounds | Time (trail) | Time (hull) | Reference
SPECK32   | 9      | N/A          | N/A         | [18]
SPECK32   | 9      | N/A          | N/A         | [16]
SPECK32   | 9      | 9 s          | 25 s        | This paper
SPECK48   | 10     | N/A          | N/A         | [18]
SPECK48   | 10     | N/A          | N/A         | [16]
SPECK48   | 10     | 3.2 h        | 157.3 h     | This paper
SPECK64   | 13     | N/A          | N/A         | [18]
SPECK64   | 13     | N/A          | N/A         | [16]
SPECK64   | 13     | 8.6 h        | 7.3 h       | This paper
SPECK64   | 14     | 25.6 h       | 5.8 h       | This paper
SPARX-64  | 10     | 3 d          | 1 h         | This paper
SPARX-64  | 11     | 5 m          | –           | This paper
SPARX-128 | 9      | 27 m         | 6 h         | This paper
SPARX-128 | 10     | 4.4 d        | –           | This paper
Chaskey   | 3      | N/A          | N/A         | [18]
Chaskey   | 4      | 15.7 m       | –           | This paper
Chaskey   | 5      | 6.6 h        | –           | This paper
CHAM-64   | 34     | N/A          | N/A         | [7]
CHAM-64   | 34     | 1.1 d        | –           | This paper
CHAM-64   | 35     | 4.8 d        | –           | This paper

1.2. Roadmap

This paper is organized as follows. We first present some preliminaries used in this paper in Section 2. In Section 3, we introduce the algorithm for constructing the space of input-output mask tuples, the algorithm for constructing cLAT, and the improved automatic search algorithm for linear cryptanalysis on ARX ciphers. In Section 4, we apply the new tool to several typical ARX ciphers. Finally, we conclude our work in Section 5.

2. Preliminaries

2.1. Notation

For addition modulo 2^n, i.e., x ⊞ y, we use the symbols ⋘ and ⋙ to indicate rotation to the left and to the right, and ≪ and ≫ to indicate the left and right shift operations, respectively. The binary operator symbols ⊕, ∨, ∧, ∥, and ¬ represent XOR, OR, AND, concatenation, and bitwise NOT, respectively. For a vector x, wt(x) represents its Hamming weight and x_i is the i-th bit of it. 0 denotes the zero vector.

2.2. Linear Correlation Calculation for Modular Addition

Let F_2^n be the n-dimensional vector space over the binary field F_2. For Boolean functions f: F_2^n → F_2 and h: F_2^n → F_2, the linear correlation between f and h can be denoted by

cor(f, h) = 2^{−n} (#{x ∈ F_2^n | f(x) = h(x)} − #{x ∈ F_2^n | f(x) ≠ h(x)}).

For modular addition x ⊞ y, let α and β be the input masks, u be the output mask, and ⟨·, ·⟩ be the standard inner product. According to the definition of linear correlation, when x and y are chosen uniformly at random, the linear approximation probability is defined as

p = Pr[⟨u, x ⊞ y⟩ = ⟨α, x⟩ ⊕ ⟨β, y⟩].

The linear correlation of modular addition can then be expressed via the Walsh transformation, and thus

cor(u, α, β) = 2^{−2n} Σ_{x, y ∈ F_2^n} (−1)^{⟨u, x ⊞ y⟩ ⊕ ⟨α, x⟩ ⊕ ⟨β, y⟩}.

Let p = 1/2 + ε, where ε is the bias. When the correlation is cor(u, α, β), the linear approximation probability is p = (1 + cor(u, α, β))/2, and the linear correlation can be denoted by

cor(u, α, β) = 2p − 1 = 2ε.

We call w = −log2(|cor(u, α, β)|) the correlation weight, and the linear square correlation can be denoted by

cor^2(u, α, β) = 2^{−2w}.
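To make these definitions concrete, the correlation of addition modulo 2^n can be computed directly from the Walsh-transform formula for small n. The following sketch (function name and interface are ours, not from the paper) is a brute-force reference useful for validating faster table-based methods:

```python
# Brute-force correlation of <u, x + y mod 2^n> ^ <alpha, x> ^ <beta, y>,
# evaluated over all (x, y); only practical for small n.

def cor_add(u, alpha, beta, n):
    mask = (1 << n) - 1
    total = 0
    for x in range(1 << n):
        for y in range(1 << n):
            z = (x + y) & mask
            bit = bin((u & z) ^ (alpha & x) ^ (beta & y)).count("1") & 1  # parity
            total += 1 - 2 * bit                                          # (-1)^bit
    return total / float(1 << (2 * n))

if __name__ == "__main__":
    n = 8
    c = cor_add(0x01, 0x01, 0x01, n)   # masks selecting only the least significant bit
    print(c)                           # the LSB of addition is linear, so c = 1.0
    # the correlation weight is -log2(|c|) whenever c != 0
```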

For addition modulo 2^n, x ⊞ y can be rewritten as x ⊞ y = x ⊕ y ⊕ c, in which c is the carry vector with c_0 = 0 and c_{i+1} = (x_i ∧ y_i) ⊕ (x_i ∧ c_i) ⊕ (y_i ∧ c_i) for 0 ≤ i < n − 1. First-order and higher-order approximations of the carry function can be built from this recursion.
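This rewriting can be checked exhaustively for small word sizes. The sketch below (our own helper names) verifies that x ⊕ y ⊕ c reproduces addition modulo 2^n with the carry recursion as stated:

```python
# Verify x + y mod 2^n == x XOR y XOR c, where c_0 = 0 and
# c_{i+1} = (x_i AND y_i) XOR (x_i AND c_i) XOR (y_i AND c_i).

def carry(x, y, n):
    c, ci = 0, 0
    for i in range(n - 1):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        ci = (xi & yi) ^ (xi & ci) ^ (yi & ci)
        c |= ci << (i + 1)
    return c

n = 8
for x in range(1 << n):
    for y in range(1 << n):
        assert (x + y) & ((1 << n) - 1) == x ^ y ^ carry(x, y, n)
print("carry rewriting verified for n =", n)
```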

In [13], Wallén introduced a theorem to calculate the linear correlation by analyzing the high-order approximations of the carry function recursively. In [12], based on bit state transitions, the formula to calculate the correlation was given by the following theorem.

Theorem 1 (see [12]). For addition modulo 2^n, let α and β be the input masks and u be the output mask. Define an auxiliary vector Φ whose components are octal words. Then, the linear correlation can be denoted by a product of a fixed row vector, the n transition matrices indexed by the octal words of Φ, and a fixed column vector, where the row vector, the column vector, and the matrices are defined in [12].

In [17], Schulte-Geers extended Theorem 1 and derived a fully explicit formula for the linear correlation calculation, given by Theorem 2.

Theorem 2 (see [17]). For addition modulo 2^n with input-output mask tuple (α, β, u), let a vectorial Boolean function denote the partial sum mapping. Then, the linear correlation can be denoted by an explicit expression involving an indicator function of the graph of this mapping, where, for n-bit vectors a and b, a ⪯ b means a_i ≤ b_i for 0 ≤ i < n.
In iterative ciphers, the correlation of a single r-round linear trail is the product of the correlations of each round [25]. Assuming that there are t additions modulo 2^n with two inputs in each round, and that α and β are the input and output masks of the r-round linear trail, the correlation of the trail can be denoted by

cor(α, β) = ∏_{i=1}^{r} ∏_{j=1}^{t} cor_{i,j},

where cor_{i,j} is the correlation of the j-th modular addition in round i.

The linear approximation of a linear hull represents the potential of all linear trails with the same input-output masks [26]. The averaged linear potential (ALP) can be counted by the following formula:

ALP(α, β) = Σ_{trails with masks (α, β)} cor^2.

Assuming that the key k is selected uniformly from the key space K, the statistics of the ALP can be formulated as (13), where N_w is the number of trails with correlation weight w:

ALP(α, β) ≈ Σ_{w = w_min}^{w_up} N_w · 2^{−2w}. (13)

Here, w_min is the correlation weight of the linear trail whose input-output masks are chosen as the fixed input-output masks of the linear hull, and w_up is the upper bound to be searched, which should be chosen as a trade-off between the search time and the accuracy of the ALP.
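In practice, formula (13) is evaluated from a histogram of trail weights collected during the search. A minimal sketch follows (the counts below are made-up numbers for illustration only):

```python
# Estimate the averaged linear potential (ALP) of a linear hull from a histogram
# mapping correlation weight w to the number of trails N_w; each trail of weight
# w contributes 2^(-2w), as in formula (13).
import math

def alp_from_histogram(trail_counts):
    return sum(n_w * 2.0 ** (-2 * w) for w, n_w in trail_counts.items())

counts = {14: 1, 15: 4, 16: 12}      # hypothetical N_w values
alp = alp_from_histogram(counts)
print(alp, -math.log2(alp))          # ALP and its weight -log2(ALP)
```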

2.3. Linear Properties of SPECK, SPARX, Chaskey, and CHAM

The SPECK family of ciphers was designed by the NSA in 2013 [3]. The SPARX family of ciphers was introduced by Dinu et al. at ASIACRYPT 2016 [6]. In SPARX, the nonlinear ARX-box (SPECKEY) is obtained by modifying the round function of SPECK32. The linear mask propagation properties of the round functions of SPECK and SPECKEY are shown in Figure 1. The rotation parameters are (7, 2) for SPECK32, while they are (8, 3) for the other variants.
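For reference, the round map that these mask-propagation properties are built on can be sketched as follows (our own rendering of the public SPECK specification [3]; the per-variant rotation amounts are the ones quoted above):

```python
# One SPECK round on n-bit words; (ra, rb) = (7, 2) for SPECK32 and (8, 3)
# for the larger variants.

def speck_round(x, y, k, n=16, ra=7, rb=2):
    mask = (1 << n) - 1
    ror = lambda v, r: ((v >> r) | (v << (n - r))) & mask
    rol = lambda v, r: ((v << r) | (v >> (n - r))) & mask
    x = (ror(x, ra) + y) & mask   # rotate, then add modulo 2^n
    x ^= k                        # subkey XOR
    y = rol(y, rb) ^ x            # rotate and mix with the left word
    return x, y
```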

If the input-output masks of the modular additions in two consecutive rounds of SPECK are known, the input and output masks of these two rounds can be derived as stated in Property 1.

Property 1. If () and () are given, then , , , , , and .
The linear layer functions [6] for SPARX-64 and SPARX-128 are shown in Figure 2. Due to the existence of the three-forked branches, the masks of the linear transformation layer have the following properties.

Property 2. For SPARX-64, if the masks are transformed by the linear layer function , let , , then , , , and .

Property 3. For SPARX-128, if the masks are transformed by the linear layer function , let , , then , , , , , , , and .
Chaskey is a MAC algorithm introduced by Mouha et al. at SAC 2014 [5], and an enhanced variant was proposed in 2015 [27], which increases the number of permutation rounds from 8 to 12. The round function of the permutation is shown in Figure 3, in which the four modular additions are labeled. The input mask and the output mask of the first round can be denoted by Property 4.

Property 4. For the permutation of Chaskey, if the input-output masks of each modular addition in the first round are (), , the corresponding correlation weight of each modular addition is , respectively. Hence, in the first round, , , , , , , , and . The corresponding correlation weight of the round function is .
CHAM is a family of lightweight block ciphers proposed by Koo et al. at ICISC 2017, which blends the good design features of SIMON and SPECK [7]. The three variants of CHAM cover two block sizes, i.e., CHAM-64 and CHAM-128. The linear mask propagation over 4 consecutive rounds of CHAM is shown in Figure 4. If the input-output mask tuples of each modular addition in the first 4 rounds are given, the input and output masks of the first 4 rounds can be deduced by Property 5.

Property 5. For CHAM, if the input-output mask tuples () of each modular addition of the first 4 rounds are given, , the input and output masks of the first 4 rounds can be deduced as follows. , , , ; , , , ; , , , ; , , , ; , , , .

3. Automatic Search for the Linear Characteristics on ARX Ciphers

3.1. Input-Output Masks of Specific Correlation Weight

The number of input-output mask tuples in the first round is closely related to the complexity of the branch-and-bound search algorithm, and traversing all possible input masks of the first round results in high complexity. An alternative approach is to consider the possible correlation weights corresponding to the input-output masks and to exclude the tuples with large correlation weights. However, although the correlation of a modular addition can be calculated by Theorem 2 once its input-output masks are fixed, a fixed correlation weight may correspond to multiple input-output mask tuples.

For addition modulo 2^n, the maximum correlation weight is n − 1, and the size of the total space S of all input-output mask tuples is 2^{3n}. We can rank the correlation weights from 0 to n − 1 and construct the input-output mask subspace S_i corresponding to correlation weight i, for 0 ≤ i ≤ n − 1. Therefore, the total space S can be divided into n subspaces, i.e., S = S_0 ∪ S_1 ∪ ⋯ ∪ S_{n−1}.
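This partition can be reproduced exhaustively for a tiny word size; the sketch below (illustrative names, brute force only) buckets every mask tuple with nonzero correlation by its correlation weight and shows that the observed weights indeed range from 0 to n − 1:

```python
# Bucket all (alpha, beta, u) tuples of nonzero correlation by correlation
# weight for a small n, illustrating S = S_0 ∪ ... ∪ S_{n-1}.
import math
from collections import defaultdict

def cor_add(u, a, b, n):             # same brute-force correlation as above
    mask, s = (1 << n) - 1, 0
    for x in range(1 << n):
        for y in range(1 << n):
            z = (x + y) & mask
            s += 1 - 2 * (bin((u & z) ^ (a & x) ^ (b & y)).count("1") & 1)
    return s / float(1 << (2 * n))

n = 4
subspaces = defaultdict(list)        # correlation weight i -> subspace S_i
for u in range(1 << n):
    for a in range(1 << n):
        for b in range(1 << n):
            c = cor_add(u, a, b, n)
            if c != 0:
                subspaces[int(round(-math.log2(abs(c))))].append((a, b, u))

for w in sorted(subspaces):
    print(w, len(subspaces[w]))      # observed weights are 0 .. n-1
```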

Definition 1. Let be the input-output masks for a modular addition with nonzero correlation. Let us define an octal word sequence , where , for .

Definition 2. Let us define three sets that may belong to, i.e., , , and .
In Theorem 2, when the correlation of a modular addition is nonzero, the value distribution of the 3 consecutive bits in and the 3 consecutive words in Φ has the following relationships, shown in Observation 1.

Observation 1. Let and for , ; hence, . For , assuming when for or , they should have , on the bit level, and it is equivalent to and . Since when and when . Hence, the value of depends on whether the bit positions of and are active. The least significant bits of the input-output masks construct the value of , which is only related to the Hamming weight of , i.e., and . Therefore, if we get the Hamming weight distribution of , then from the LSB to the MSB direction, once is determined, can be obtained. Next, is determined, and and should be satisfied; hence, the possible values of can be obtained. Recursively, all values can be constructed as an octal word sequence from the LSB to the MSB direction, subject to the above observation. Hence, the tuples () can be generated from the elements in Φ. The process to construct the subspace is shown in Algorithm 1, denoted Const().

Input: and . Each pattern of the Hamming weight distribution of can be calculated by the combination algorithm in [28], which is the combination pattern of , where for .
(1)Func_LSB: //constructing the LSBs of
(2)if then
(3) Output the tuple of () with (1, 1, 1) or (0, 0, 0);
(4)end if
(5)if then
(6) For each , , and , call Func_Middle ();
(7)else
(8) For each , , and , call Func_Middle ();
(9)end if
(10)Func_Middle (): //constructing the middle bits of
(11)if then
(12) call Func_MSB ();
(13)end if
(14)if then
(15)  if then // recorded whether the value of is 1 or not.
(16)   For each and , call Func_Middle ();
(17)  else
(18)   For each and , call Func_Middle ();
(19)  end if
(20)else //. The value of determines whether belongs to or .
(21)if then
(22)   For each and , call Func_Middle ();
(23)  else
(24)   For each and , call Func_Middle ();
(25)  end if
(26)end if
(27)Func_MSB (): //constructing the bits of with position higher than .
(28)if then //the value of determines whether equals to 0 or 7.
(29)  if then
(30)   Let and , call Func_MSB ();
(31)  else
(32)   Let and , call Func_MSB ();
(33)  end if
(34)else //.
(35)  if then
(36)   For each and , output each tuple of ();
(37)  else
(38)   For each and , output each tuple of ();
(39)end if
(40)end if
3.2. Combinational Linear Approximation Table

For addition modulo 2^n, the full LAT requires a storage size on the order of 2^{3n} entries; when n is large, it is very difficult to store. To facilitate the storage, an intuitive approach is to store only a part of the full LAT. For an n-bit vector, we can split it into t subvectors of m bits each, where n = t · m. When each m-bit subvector is determined, the n-bit vector can be obtained by concatenation. This idea gives birth to the concept of the combinational LAT (cLAT).
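The splitting and concatenation themselves are straightforward bit manipulation; a minimal sketch (helper names are ours) is:

```python
# Split an n-bit mask into t = n/m subvectors of m bits and rebuild it by
# concatenation, the basic mechanism behind the cLAT.

def split_mask(v, n, m):
    # subvectors of v, from least to most significant
    return [(v >> (m * i)) & ((1 << m) - 1) for i in range(n // m)]

def concat_mask(parts, m):
    # inverse of split_mask
    v = 0
    for i, p in enumerate(parts):
        v |= p << (m * i)
    return v

alpha = 0xBEEF                            # a 16-bit mask
parts = split_mask(alpha, 16, 8)          # [0xEF, 0xBE]
assert concat_mask(parts, 8) == alpha
```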

Property 6. For and , they are equivalent to and .

Corollary 1. Let (α, β, u) be the input-output masks of a modular addition with nonzero correlation. Splitting the corresponding vectors into t subvectors each, the correlation weight of the modular addition can be denoted by the sum of the correlation weights of the subblocks, provided that the conditions relating adjacent subblocks are satisfied.

Proof. is the sum of the Hamming weight of each subvector , so . For , the bit in can be denoted by . Let , , and the bit in should be . Hence, , when and are satisfied, i.e., and for .

If the m-bit subvector adjacent to a given subblock is known, its correlation weight can be calculated from the subvector tuple and the lowest bit of the adjacent subvector. We call such a subvector tuple a subblock, and we call the bit passed between adjacent subblocks the connection status used in the calculation. Splitting the n-bit vector into t subvectors, there are connection statuses between adjacent subblocks, and for the highest subvector, the connection status is fixed. Hence, for the highest subblock, the Hamming weight and the connection bit can be obtained, and recursively, the Hamming weights of the remaining subvectors can also be obtained. Therefore, given the connection status, we can construct an m-bit lookup table for modular addition in advance and query the table by indexing the input-output masks and the connection status. In addition, the connection status for the next subblock can also be generated.

In the top-down search techniques for ARX ciphers, for the modular additions in the middle rounds, in most cases only one input mask is fixed, while the other input mask and the output mask u are unknown. In the lookup tables, we need to look up all valid subvectors of the undetermined masks that correspond to nonzero correlation, based on the subvectors of the fixed mask. The lookup table (called the cLAT) is constructed by Algorithm 2; it takes about 4 seconds on a 2.5 GHz CPU to generate the table, with a storage size of about 1.2 GB for the chosen value of m.

(1)for each and input mask
(2)   , let and , for ;
(3)   for each input mask and output mask do
(4)    , , , ;
(5)    for to do
(6)      ;
(7)    end for
(8)    if then //determining the connection status generated by the upper subblock.
(9)      , , ;
(10)    else
(11)      , ;
(12)    end if
(13)    for to do //determining the correlation weight.
(14)      ;
(15)      if then
(16)        , ;
(17)      end if
(18)    end for
(19)    , ;
(20)    if and then //judgment conditions and .
(21)     ;
(22)     ;
(23)     ; //the number of tuples correspond to and b.
(24)     ; //connection status.
(25)     if then
(26)        ; //the minimum correlation weight corresponds to and b.
(27)       end if
(28)     end if
(29)    end for
(30)end for.
3.3. Splitting-Lookup-Recombination

Algorithm 2 constructs an m-bit cLAT for addition modulo 2^n, and this section describes how to use it. When one input mask is fixed, we can get the other input mask, the output mask, and the corresponding correlation weight by the Splitting-Lookup-Recombination approach, which consists of three steps.

3.3.1. Splitting

For addition modulo 2^n, if one of the two input masks is fixed, it is split into t subvectors of m bits. The larger m is, the fewer cLAT lookups and bit concatenation operations are needed, but the more memory the table occupies; m is chosen after balancing these trade-offs.

3.3.2. Lookup

From the MSB to the LSB direction, query the subvectors of the undetermined masks that correspond to each subvector of the fixed mask, together with the corresponding correlation weights. For the highest m-bit subvector, the connection status is fixed; looking up the cLAT yields the corresponding subvectors of the output mask and of the other input mask, the corresponding correlation weight, and the connection status for the next subvector. Similarly, the remaining subvectors of u and of the other input mask, and the corresponding correlation weights, can be obtained.

3.3.3. Recombination

All subvectors of u and of the other input mask can be obtained from the lookup tables, and the n-bit masks can be obtained by bit concatenation. The correlation weight of the modular addition is the sum of the weights of the subblocks.

When there are multiple modular additions in the round function, the undetermined input mask and output mask of each modular addition need to be obtained by the Splitting-Lookup-Recombination approach separately. In the lookup phase, one lookup per subblock and per modular addition is required, and the correlation weight of the round function is the sum of the correlation weights of the modular additions.

For each subvector of the fixed mask, the possible minimum linear correlation weight corresponding to it, that is, the smallest weight over all valid subvector tuples for that subblock, can be calculated in advance by Algorithm 2.

During the recombination phase, a correlation bound can be constructed from the weights that have already been obtained and the possible minimum correlation weights of the remaining subblocks, as shown in Corollary 2.

Corollary 2. For addition modulo 2^n with one input mask fixed, consider any mask assignment of nonzero correlation. The sum of the correlation weights of the subblocks already looked up, plus the possible minimum correlation weights of the subblocks not yet looked up, must not exceed the correlation bound.

Proof. The correlation of the modular addition is the product of the correlations of the subblocks after splitting. Let the accumulated weight be the sum of the correlation weights of the subvector tuples that have already been obtained from the lookup tables. The subvector tuples that have not been looked up yet contribute at least the sum of their possible minimum correlation weights, which yields the stated bound.

Assuming the number of valid subvector tuples corresponding to each subvector of the fixed mask is known, the number of mask branches corresponding to the modular addition is the product of these numbers. Corollary 2 can be used to filter out the branches of large correlation weight.
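A minimal sketch of this recombination-with-pruning step is given below. It assumes that per-subblock candidate lists and per-subblock minimum weights have already been produced by cLAT-style lookups; the data layout and names are illustrative, not the paper's exact implementation:

```python
# Recombine per-subblock candidates (u_block, other-input-mask block, weight)
# into full masks, cutting every branch whose accumulated weight plus the
# minimum weights of the remaining subblocks exceeds the bound (Corollary 2).

def recombine(candidates, min_weights, bound, m):
    t = len(candidates)
    tail_min = [0] * (t + 1)
    for i in range(t - 1, -1, -1):               # suffix sums of minimum weights
        tail_min[i] = tail_min[i + 1] + min_weights[i]

    def rec(i, u, beta, w):
        if i == t:
            yield u, beta, w
            return
        for ub, bb, wb in candidates[i]:
            if w + wb + tail_min[i + 1] > bound:  # pruning condition
                continue
            yield from rec(i + 1, u | (ub << (m * i)), beta | (bb << (m * i)), w + wb)

    yield from rec(0, 0, 0, 0)

# toy usage: two 8-bit subblocks with hand-made candidate lists
cands = [[(0x00, 0x00, 0), (0x11, 0x10, 2)], [(0x00, 0x00, 0), (0x80, 0x80, 1)]]
for u, beta, w in recombine(cands, min_weights=[0, 0], bound=2, m=8):
    print(hex(u), hex(beta), w)
```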

3.4. Improved Automatic Search Algorithm

In this section, we adopt a top-down technique [19–21], taking the first round as the starting point of the search process. In the first and second rounds, the input-output mask tuples of each modular addition, with correlation weight increasing monotonically, can be obtained by Algorithm 1. In the middle rounds, for each modular addition, the output mask and the undetermined input mask can be obtained by the Splitting-Lookup-Recombination approach. Algorithm 3 takes SPECK as an example.

Input: the cLAT is precomputed and stored by Algorithm 2. have been recorded
(1)Program entry:
(2)Let  = , and  = null // can be derived manually for most ARX ciphers.
(3)while  do
(4)    ; //the expected r-round correlation weight increases monotonously.
(5)    Call Procedure Round-1;
(6)end while
(7)Exit the program.
(8)Round-1: //exclude the search space with correlation weights larger than the expected bound.
(9)for to  do // increases monotonously.
(10)    if then
(11)      Return to the upper procedure with FALSE state;
(12)    else
(13)      Call Algorithm 1, Const(), and traverse each output tuple;
(14)      if call Round- and the return value is TRUE, then
(15)        Stop Algorithm 1 and return TRUE; //record the optimal linear trail that has been found.
(16)     end if
(17)    end if
(18)end for
(19)Return to the upper procedure with FALSE state;
(20)Round-