Security and Communication Networks


Research Article | Open Access


Hojoong Park, Yongjin Yeom, Ju-Sung Kang, "A Lightweight BCH Code Corrector of TRNG with Measurable Dependence", Security and Communication Networks, vol. 2019, Article ID 9684239, 11 pages, 2019. https://doi.org/10.1155/2019/9684239

A Lightweight BCH Code Corrector of TRNG with Measurable Dependence

Academic Editor: Prosanta Gope
Received: 21 Jan 2019
Accepted: 16 Apr 2019
Published: 13 May 2019

Abstract

We propose a new lightweight BCH code corrector for a true random number generator (TRNG) in which the bitwise dependence of the output bits is controllable. The proposed corrector is applicable to lightweight environments, and the degree of dependence among its output bits is adjustable according to the bias of the input bits. Hitherto, most correctors using a linear code have been studied with the aim of reducing the bias of the output bits, under the assumption that the biased input bits are independent. On the other hand, the output bits of a linear code corrector are inherently not independent even when the input bits are independent, yet there are no results dealing with the independence of the output bits. The well-known von Neumann corrector has an inefficient compression rate, and its output length is nondeterministic. Since heavy cryptographic algorithms are used in NIST's conditioning components to reduce the bias of the input bits, they are not appropriate for a lightweight environment. Thus we concentrate on linear code correctors and obtain a lightweight BCH code corrector with measurable dependence among the output bits as well as measurable bias. Moreover, we provide some simulations to examine our results.

1. Introduction

A random number generator (RNG) is essential in modern cryptographic systems and is used to generate security parameters such as secret keys, initialization vectors, nonces, and salts. Random numbers used for cryptographic purposes should be generated by a cryptographically secure random number generator [1]. A cryptographically secure random number generator is composed of a true random number generator (TRNG) and a pseudorandom number generator (PRNG) [1-3], as shown in Figure 1. The nondeterministic outputs generated by the TRNG are the root of security, also known as the entropy source, for the cryptographically secure random number generator. In the PRNG process, the cryptographically secure random numbers are finally generated by a deterministic algorithm that takes the seed, the output of the TRNG, as its input value. Hence the security of the cryptographically secure random number generator depends on the nondeterministic entropy source and all its constituent parts [3]. In order to generate cryptographically secure random numbers, we first collect entropy in the TRNG process from noise sources such as thermal noise, ring oscillators, noise from quantum effects, outputs of crypto APIs in the operating system, and CPU jitter noise sources [4, 5].

It is difficult to directly utilize the noise source as a seed, the input value of the PRNG, since the bias of the noise source can be exploited to attack the cryptosystem through statistical estimation of the next TRNG output bits [6]. To solve this problem, various post-processing components are used in the TRNG, as shown in Figure 1. The principal role of a post-processing component is to reduce the bias of the noise source in the TRNG. Well-known post-processing components include the von Neumann corrector [7], the XOR corrector [8], NIST's conditioning components [4], and correctors using linear and nonlinear codes (code correctors). Until now, most research on post-processing components has been conducted on the subject of reducing the bias of the output bits [6, 9-12]. The outputs of the von Neumann corrector [7] are perfectly unbiased if the input bits are independent. The bias of the output bits from the XOR corrector [8] is proven to be 2^(n-1) e^n when n independent input bits of bias e are XORed together, for any n >= 2. In NIST's conditioning components [4], FIPS-approved or NIST-recommended cryptographic algorithms are used to reduce the bias of the input values. The quality of the output bits from NIST's conditioning components is only empirically ensured by the pseudorandomness and statistical randomness of the cryptographic algorithms used, which are too heavy for a lightweight environment. Linear and nonlinear codes are employed as code correctors for reducing the bias of the input bits. The advantage of a code corrector is that the bias of the output value is theoretically determined by the bias of the input bits and the code used.

Meanwhile, the known results on the independence of the output bits of post-processing components are as follows. The output bits of the von Neumann corrector and the XOR corrector are proven to be bitwise independent when the input bits are independent and stationary [9]. We concede that the output bits of NIST's conditioning components are heuristically independent, since the outputs are generated by heavy cryptographic algorithms. On the other hand, the output bits of a code corrector are intrinsically not independent even when the input bits are independent; however, there are no results dealing with the independence of the output bits. Therefore, we inspect the code corrector from the viewpoint of independence in this paper.

Related Works. A number of works design post-processing components using linear codes and study how to reduce the bias of the output bits when the biased input bits are independent and stationary. As a notable result, Markovski et al. [10] proposed a new linear corrector using quasigroups and analyzed its output using the Kolmogorov-Smirnov test; they provided simulation results and mentioned that the linear corrector is efficiently implemented in both hardware and software. Dichtl [11] refuted the post-processing component of Markovski et al. and proposed a new linear corrector using an XOR compression function; he analyzed the compression function with respect to the input bias in order to reduce the output bias. Lacharme [12] proposed post-processing components using a resilient function and a cyclic code; he calculated an upper bound on the output bias using the relation between resilient functions and cyclic codes and analyzed the bias and min-entropy of the output. Kim et al. [6] designed a linear corrector overcoming the minimum-distance limitation using a quadratic residue code; they analyzed the outputs of the corrector using the entropy test of the AIS.31 standard and mutual information, and provided a hardware architecture. Kwok et al. [9] compared code correctors and the von Neumann corrector in terms of compression rate, output bias, and adversary bias; they also provided practical ways to implement linear code correctors in hardware based on their analysis.

Our Contribution. We propose a new lightweight BCH code corrector in which the bitwise dependence of the output bits is adjustable. Since most code correctors have been studied on the subject of reducing the bias of the output bits when the biased input bits are independent, we concentrate on the dependence of the output bits and define a new measure, the degree of dependence, in order to build a lightweight BCH code corrector with controllable dependence, as shown in Figure 2. We find, through theoretical and simulation results, that the degree of dependence is related to the bias of the input bits. Moreover, the BCH code corrector is applicable to lightweight environments such as IoT, embedded, and mobile devices because of its small code size. In other words, the proposed BCH code corrector is applicable to a lightweight environment as the post-processing component of a TRNG, and the dependence among the output bits of this corrector is controllable depending on the bias of the input bits.

Organization. The rest of this paper is organized as follows. In Section 2, we introduce post-processing components and their properties. In Section 3, we examine the output bits of the BCH code corrector from the viewpoint of mutual independence. In Section 4, we formally define a new measure, the degree of dependence, and compare it with another method of measuring dependence. In Section 5, we use the degree of dependence to measure the dependence of the output bits of the BCH code corrector. In Section 6, we provide experimental results supporting our analysis, and we propose an application of the BCH code corrector of a TRNG in a lightweight environment. Finally, Section 7 concludes the paper.

2. Post-Processing Component

The post-processing component has been utilized in a TRNG mainly to decrease the bias of the noise source. Its basic role is to produce a seed with a high entropy rate, since the security of the cryptographically secure random number generator essentially depends on the entropy of the seed [4]. Representative post-processing components applied in TRNGs include the von Neumann corrector [7], the XOR corrector [8], NIST's conditioning components [4], and code correctors. Some properties and limitations of these post-processing components are summarized in Table 1.


Table 1: Properties and limitations of post-processing components.

von Neumann corrector
  Properties: The output is unbiased and independent.
  Limitations: The output length is nondeterministic, since the compression rate is 0.25 on average.

XOR corrector
  Properties: The post-processed data is independent and implementation is easy.
  Limitations: The compression rate is fixed at 0.5, and the output bias is fixed at 2e^2 if the input bias is e.

NIST's conditioning component
  Properties: The output bits of the conditioning component are regarded as heuristically unbiased and independent.
  Limitations: The output bits have not been theoretically proven unbiased and independent.

Code corrector
  Properties: The bias of the output bits can be adjusted via the code used and the input bias.
  Limitations: The output bits are not independent even when the input sequence is independent.

On the other hand, most post-processing components have been studied in the direction of reducing the bias of the output bits. The von Neumann corrector generates perfectly unbiased and independent outputs when the input bits are independent. Let X and Y be an input bit and an output bit, respectively, with p = P(X = 1); the bias of an input bit is then defined as e = p - 1/2. When two input bits are '01', the von Neumann corrector outputs '0', and when two input bits are '10', it outputs '1'. In the other cases, when the two bits of the noise source are '00' or '11', there is no output. Then the probabilities of the output bit are P(Y = 0) = P(Y = 1) = p(1 - p) due to the independence of the input bits, so the two output values are equally likely. Since the bias of the output bit is 0, the output of the von Neumann corrector is perfectly unbiased. However, the von Neumann corrector has the problems that the output length is nondeterministic and that the expected output length is only one-fourth of the input length, since the compression rate is 0.25 on average.
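The pairing rule described above can be sketched in a few lines of Python (the function name and list interface are ours, for illustration only):

```python
def von_neumann(bits):
    """Von Neumann corrector: scan non-overlapping pairs of input bits.

    '01' -> output 0, '10' -> output 1, '00'/'11' -> no output.
    For independent inputs with P(1) = p, each emitted bit is exactly
    unbiased, but only a fraction 2*p*(1-p) of the pairs emit anything,
    so the output length is nondeterministic (rate 0.25 when p = 0.5).
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:          # pair is '01' or '10'
            out.append(a)   # the first bit of the pair is the output
    return out
```

Note how the output length depends on the data, not just on the input length: the all-equal pairs are simply dropped.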

The XOR corrector has the properties that its output bits are independent when the input bits are independent and stationary, and that it is easy to implement [9]. However, it is not flexible, since the compression rate is fixed at one-half and the output bias is fixed at 2e^2 when the input bias is e [8]. In NIST's conditioning components, FIPS-approved or NIST-recommended cryptographic algorithms, such as the hash functions in FIPS 180-4 [13] and FIPS 202 [14], HMAC in FIPS 198-1 [15], and CBC-MAC in SP 800-90B [4], are used to reduce the bias of the input bits. Hence the unbiasedness of the outputs is only heuristically guaranteed by the pseudorandomness of the outputs of the cryptographic algorithms. In addition, because it is difficult to theoretically analyze a conditioning component with respect to the bias and independence of its output, there are no theoretical results as for the other post-processing functions.
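A minimal sketch of the XOR corrector, generalized to blocks of n bits (the default n = 2 matches the fixed one-half compression rate discussed above; the function name is ours):

```python
def xor_corrector(bits, n=2):
    """XOR corrector: fold each non-overlapping block of n bits into one
    output bit by XOR. For independent input bits of bias e, the output
    bias is 2**(n-1) * e**n, so the bias shrinks quickly but the
    compression rate is fixed at 1/n."""
    out = []
    for i in range(0, len(bits) - n + 1, n):
        y = 0
        for b in bits[i:i + n]:
            y ^= b
        out.append(y)
    return out
```

With n = 2 and input bias e = 0.25, the output bias is 2 * 0.25^2 = 0.125: halved, at the cost of half the bits.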

Linear codes such as the dual code [6], BCH code [9], and cyclic code [12] are useful for code correctors. Until now, code correctors have been actively studied in the direction of reducing the bias of the output bits [9-12], since the compression rate and the bias of the output bits can be adjusted by the bias of the input value and the characteristics of the code used. On the other hand, the output bits of a code corrector are intrinsically not independent even when the input bits are bitwise independent; however, there are no results pertaining to the independence of the output bits. Hence we examine the code corrector from the viewpoint of independence in this paper.

3. Mutual Independence of BCH Code Corrector

In this section, we examine the [7, 4, 3] BCH code corrector from the viewpoint of mutual independence. In order to inspect its properties, we choose the [7, 4, 3] BCH code, the smallest BCH code, and adopt the same assumption as in [9-12]: the input bits are bitwise independent, biased, and stationary. We verify that the output bits of the [7, 4, 3] BCH code corrector are not independent even though the input bits are bitwise independent.

Let the input bits from the noise source be x = (x_1, ..., x_7) and let the post-processed bits be y = (y_1, ..., y_4). The BCH code corrector is generally operated as the multiplication of a generator matrix G and the input bits x. In other words, the output bits of the BCH code corrector are defined as

y^T = G x^T.

Note that x^T denotes the transpose of x, and the generator matrix G is composed of the coefficients of the generator polynomial [6, 9].

3.1. Output Distribution of the [7, 4, 3] BCH Code Corrector

The [7, 4, 3] BCH code corrector is generated by the generator polynomial of the [7, 4, 3] BCH code. Since the generator polynomial of the [7, 4, 3] BCH code is g(x) = x^3 + x + 1, the generator matrix G is generated as

G = | 1 0 1 1 0 0 0 |
    | 0 1 0 1 1 0 0 |
    | 0 0 1 0 1 1 0 |
    | 0 0 0 1 0 1 1 |
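The whole corrector is a single matrix-vector product over GF(2). A sketch in Python, assuming the standard cyclic-code construction in which the rows of G are shifts of the coefficient vector of g(x) = x^3 + x + 1 (the paper's exact bit ordering may differ):

```python
import numpy as np

# Generator matrix of the [7, 4, 3] BCH code; each row shifts the
# coefficient vector (1, 0, 1, 1) of g(x) = x^3 + x + 1 by one position.
G = np.array([[1, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def bch_corrector(x):
    """Compress 7 raw input bits into 4 post-processed bits: y = G x over GF(2)."""
    return (G @ np.asarray(x, dtype=np.uint8)) % 2
```

Each output bit is the XOR of the three input bits selected by the 1-entries of the corresponding row of G, which is why this corrector is so cheap in hardware and software.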

Let the input bits be x = (x_1, ..., x_7) and let the output bits of the [7, 4, 3] BCH code corrector be y = (y_1, ..., y_4); then each output bit can be represented as the modulo-2 sum of the input bits x_j for which the corresponding matrix entry G_ij is 1. Let X be a random variable on an input bit and let e be the input bias; then the distribution of an input bit is described as P(X = 1) = 1/2 + e and P(X = 0) = 1/2 - e. Let Y be a random variable on an output bit, that is, the XOR of the input bits selected by the 1-entries of the corresponding row of G. Since the input bits are independent by our assumption, the distribution of the output bits of the [7, 4, 3] BCH code corrector is calculated as in Table 2. Table 2 is used to verify the properties of the [7, 4, 3] BCH code corrector as well.


[Table 2: joint distribution of the output bits of the [7, 4, 3] BCH code corrector, with rows indexed by y_1 in {0, 1} and columns by (y_2, y_3, y_4) in {000, 001, ..., 111}, plus a Sum column.]

3.2. Mutual Independence of the [7, 4, 3] BCH Code Corrector

We examine the [7, 4, 3] BCH code corrector from the viewpoint of mutual independence, using Table 2 to verify the dependence of its output bits. In this subsection, we first recall the definition of mutual independence and verify that the output bits of the [7, 4, 3] BCH code corrector are not mutually independent. Finally, we derive the conjecture that the output bits of a BCH code corrector are not mutually independent, even though the input bits are bitwise independent, due to the intrinsic property of the generator matrix of the BCH code.

Definition 1 (mutual independence [16]). Let the random variables X_1, ..., X_n have the joint probability density function f(x_1, ..., x_n) and the marginal probability density functions f_i1(x_i1), ..., f_ik(x_ik), respectively, for every 2 <= k <= n and every subset {i_1, ..., i_k} of k distinct values drawn from the set of the first n natural numbers. Then X_1, ..., X_n are mutually independent if and only if

f(x_i1, ..., x_ik) = f_i1(x_i1) f_i2(x_i2) ... f_ik(x_ik).

Since the output length of the [7, 4, 3] BCH code corrector is 4 bits, we have to validate the independence of two bits, three bits, and four bits, respectively, in order to verify the mutual independence of the output bits.

Proposition 2. The two output bits of the [7, 4, 3] BCH code corrector are not independent.

Proof. For 1 <= i < j <= 4, let X_k be a random variable on the k-th bit of the input and let Y_i be a random variable on the i-th bit of the output. Without loss of generality, we may consider Y_1 and Y_2, due to the fact that the input bits are independent by the assumption. From Table 2, P(Y_1 = 0, Y_2 = 0) is calculated and compared with P(Y_1 = 0) P(Y_2 = 0); the two quantities differ whenever the input bias e is nonzero. Similar computations for the remaining pairs (Y_1, Y_3), (Y_1, Y_4), (Y_2, Y_3), (Y_2, Y_4), and (Y_3, Y_4) give the same conclusion. Therefore, the output bits of the [7, 4, 3] BCH code corrector are not pairwise independent.

Proposition 3. The three output bits of the [7, 4, 3] BCH code corrector are not mutually independent.

Proof. Proposition 3 is proven by the same method as Proposition 2. In proving the mutual independence of three bits, we classify the triples of output bits into two cases according to the characteristics of the generator matrix and the generator polynomial: in Case 1, the three output bits share two input bits in the post-processing operation, while in Case 2, they share only one input bit. In both cases, for each triple (Y_i, Y_j, Y_k) of output bits, the joint probability P(Y_i = 0, Y_j = 0, Y_k = 0) computed from Table 2, using the bitwise independence of the input bits, differs from the product P(Y_i = 0) P(Y_j = 0) P(Y_k = 0) whenever the input bias e is nonzero, and similar arguments apply to the other joint probabilities. Therefore, the three output bits of the [7, 4, 3] BCH code corrector are not mutually independent.

Proposition 4. The four output bits of the [7, 4, 3] BCH code corrector are not mutually independent.

Proof. For 1 <= i <= 7 and 1 <= j <= 4, let X_i be a random variable on the i-th bit of the input and let Y_j be a random variable on the j-th bit of the output. We represent the four bits under analysis as Y_1, Y_2, Y_3, and Y_4 and proceed by the same method as in Proposition 2: the joint probability P(Y_1 = 0, Y_2 = 0, Y_3 = 0, Y_4 = 0) computed from Table 2 differs from the product of the marginal probabilities whenever the input bias e is nonzero, and similar arguments apply to the other joint probabilities. Hence the four output bits of the [7, 4, 3] BCH code corrector are not mutually independent.
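The non-independence asserted by Propositions 2-4 can also be checked exhaustively by enumerating all 2^7 inputs. The sketch below assumes the shifted-coefficient generator matrix of g(x) = x^3 + x + 1 (our assumed ordering) and an arbitrary biased input probability p = 0.75, i.e. input bias 0.25:

```python
import itertools
import numpy as np

G = np.array([[1, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def output_distribution(p1):
    """Exact joint distribution of (Y1, Y2, Y3, Y4) for i.i.d. input bits
    with P(x_i = 1) = p1, obtained by enumerating all 2^7 input vectors."""
    dist = np.zeros((2, 2, 2, 2))
    for x in itertools.product([0, 1], repeat=7):
        px = np.prod([p1 if b else 1 - p1 for b in x])
        y = tuple((G @ np.array(x, dtype=np.uint8)) % 2)
        dist[y] += px
    return dist

dist = output_distribution(0.75)
# Pairwise check for (Y1, Y2): joint probability vs product of marginals.
p1 = dist.sum(axis=(1, 2, 3))[1]      # P(Y1 = 1)
p2 = dist.sum(axis=(0, 2, 3))[1]      # P(Y2 = 1)
p12 = dist.sum(axis=(2, 3))[1, 1]     # P(Y1 = 1, Y2 = 1)
gap = abs(p12 - p1 * p2)              # nonzero => not independent
```

A nonzero gap for any pair already rules out mutual independence; the same enumeration extends directly to triples and to all four bits.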

We have thus verified that the output bits of the [7, 4, 3] BCH code corrector are not mutually independent even though the input bits are bitwise independent. We also conjecture that the output bits of a BCH code corrector are in general not mutually independent due to the inherent characteristics of the generator polynomial and generator matrix, namely the sequential degrees in the generator polynomial of a BCH code and the rank structure of the generator matrix. For example, the generator polynomial of the [7, 4, 3] BCH code is x^3 + x + 1; the generator polynomials of larger BCH codes are listed in [9, 17].

4. Degree of Dependence

In this section, we define a new measure, the degree of dependence, which quantifies the difference between the distribution of the output bits of the [7, 4, 3] BCH code corrector and the distribution in which the output bits are independent. In order to formally define the degree of dependence of a given distribution, we first define D_k, which denotes the maximum difference between the k-dimensional distributions of the given distribution and the distributions given by independent random variables, where the degree of dependence among k bits is assessed.

Definition 5 (degree of dependence). Let X = (X_1, ..., X_n) be a given random vector on {0, 1}^n with the joint distribution P. For any 2 <= k <= n, D_k(P) denotes the maximum difference between the k-dimensional joint distributions and the distributions given by independent random variables. That is, D_k(P) is defined as

D_k(P) = max over subsets {i_1, ..., i_k} and values (x_i1, ..., x_ik) of | P(x_i1, ..., x_ik) - P(x_i1) P(x_i2) ... P(x_ik) |,   (9)

and then the degree of dependence of the given distribution P, D(P), is denoted by

D(P) = max over 2 <= k <= n of D_k(P).   (10)

On the other hand, there is another method of measuring dependence, k-wise delta-dependence [18]. k-wise delta-dependence is determined by the distance between the joint uniform distribution of k random variables and the joint distribution of k random variables. It is defined as follows.

Definition 6 (k-wise delta-dependent). Let D be a given joint distribution on {0, 1}^n and let U be the uniform distribution on {0, 1}^n. Then D is said to be k-wise delta-dependent if, for any subset S of the index set {1, ..., n} such that |S| <= k,

|| U_S - D_S || <= delta,

where the variation distance between two distributions D_1 and D_2 defined over the same probability space Omega is denoted by

|| D_1 - D_2 || = sum over x in Omega of | D_1(x) - D_2(x) |,

and U_S and D_S are the distributions U and D, respectively, restricted to the subset S.

The degree of dependence is similar to the concept of k-wise delta-dependence. By Definition 6, k-wise delta-dependence is measured as the sum of the differences between the joint uniform distribution of k random variables and the given k-dimensional joint distributions. However, the degree of dependence measures the maximum difference between the distribution of the output bits of the [7, 4, 3] BCH code corrector and the distribution in which the output bits are independent. The distribution in which the output bits are independent is not a fixed distribution such as the uniform distribution, since it only has to satisfy (3) in Definition 1. Hence the joint uniform distribution is not, in general, equal to the distribution in which the output bits are independent. In order to show the difference between the degree of dependence and k-wise delta-dependence, we describe the following example.
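A direct reading of Definition 5 can be implemented as follows (the helper below is our own; it maximizes |joint - product of marginals| over every subset of k >= 2 coordinates):

```python
import itertools
import numpy as np

def degree_of_dependence(dist):
    """Degree of dependence D of a joint distribution given as an
    n-dimensional array with one axis per bit (Definition 5)."""
    n = dist.ndim
    # One-dimensional marginals P(X_i).
    marg = [dist.sum(axis=tuple(j for j in range(n) if j != i))
            for i in range(n)]
    d = 0.0
    for k in range(2, n + 1):
        for idx in itertools.combinations(range(n), k):
            rest = tuple(j for j in range(n) if j not in idx)
            joint = dist.sum(axis=rest)          # k-dimensional marginal
            indep = marg[idx[0]]
            for i in idx[1:]:                    # product of 1-dim marginals
                indep = np.multiply.outer(indep, marg[i])
            d = max(d, float(np.max(np.abs(joint - indep))))
    return d
```

For a product distribution (independent bits) the result is 0; for the perfectly correlated two-bit distribution with P(00) = P(11) = 1/2, the result is 1/4, even though each marginal is unbiased.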

Example. For 1 <= i <= 4, let X_i be a random variable on the i-th bit, and let U be the 4-dimensional joint uniform distribution; that is, U(x_1, x_2, x_3, x_4) = 1/16 for all (x_1, x_2, x_3, x_4) in {0, 1}^4. Let P be the joint distribution on {0, 1}^4 given by Figure 3.

Then the marginal distributions are calculated from the joint distribution P. By Definition 5, the degree of dependence D(P) is obtained from (9) and (10) by maximizing, over k and over subsets of the bits, the difference between the k-dimensional marginals of P and the products of the corresponding one-dimensional marginals. On the other hand, by Definition 6, P is 4-wise delta-dependent, where delta is calculated as the variation distance between the uniform distribution and the restrictions of P. The two quantities differ, since the degree of dependence compares P with the product of its own marginals, whereas k-wise delta-dependence compares P with the uniform distribution.

5. Degree of Dependence of the [7, 4, 3] BCH Code Corrector

We verified in Section 3 that the output bits of the [7, 4, 3] BCH code corrector are not mutually independent, and we formally defined the degree of dependence of a given distribution in Section 4. In this section, we use the degree of dependence to measure the difference between the distribution of the output bits of the [7, 4, 3] BCH code corrector and the distribution given by four independent random variables. In order to examine the degree of dependence of the [7, 4, 3] BCH code corrector, we assume that the input bits are bitwise independent, biased, and stationary, as in Section 3. Figure 4 shows the assumption of our inspection and the degree of dependence of the [7, 4, 3] BCH code corrector.

For 1 <= i <= 7, let the input bits be x_i and let e be the bias of the input bits, where the range of the input bias is -1/2 < e < 1/2. Then the distribution of the input bits is represented as P(x_i = 1) = 1/2 + e and P(x_i = 0) = 1/2 - e. Let P be the distribution of the output bits y_1, y_2, y_3, y_4 of the [7, 4, 3] BCH code corrector. From Table 2, the output distribution of the [7, 4, 3] BCH code corrector is described as a function of e.

In order to examine the degree of dependence of the [7, 4, 3] BCH code corrector, we employ the output distribution of the [7, 4, 3] BCH code corrector in Table 2, Propositions 2, 3, and 4, and our assumption. For 2 <= k <= 4, D_k is calculated as the maximum difference between the k-dimensional distribution of the output bits of the [7, 4, 3] BCH code corrector and the distribution in which the k bits among the outputs are independent. We obtain D_2, D_3, and D_4 by using (9) in Definition 5, and they can be represented as functions of e for -1/2 < e < 1/2.

Based on these results, Figure 5 depicts the graphs of D_2, D_3, and D_4 for -1/2 < e < 1/2. Since D is determined by (10) as the maximum value of D_2, D_3, and D_4, we can derive D of the [7, 4, 3] BCH code corrector. For representative values of e, the corresponding values of D_2, D_3, D_4, and D are described in Table 3. From Figure 5 and Table 3, we conjecture that D of the [7, 4, 3] BCH code corrector is determined by the input bias e, and we also derive the fact that D is close to 0 when the bias of the input bits is close to 0.
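To illustrate this claimed behavior, the sketch below computes the pairwise component D_2 of the [7, 4, 3] corrector exactly for a few bias values, again assuming our shifted-coefficient generator matrix for g(x) = x^3 + x + 1:

```python
import itertools
import numpy as np

G = np.array([[1, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def joint_dist(e):
    """Exact output distribution when P(x_i = 1) = 1/2 + e for i.i.d. inputs."""
    p1 = 0.5 + e
    dist = np.zeros((2, 2, 2, 2))
    for x in itertools.product([0, 1], repeat=7):
        px = np.prod([p1 if b else 1 - p1 for b in x])
        dist[tuple((G @ np.array(x, dtype=np.uint8)) % 2)] += px
    return dist

def d2(dist):
    """D_2: max |P(y_i, y_j) - P(y_i) P(y_j)| over all pairs of output bits."""
    n = dist.ndim
    marg = [dist.sum(axis=tuple(a for a in range(n) if a != i)) for i in range(n)]
    best = 0.0
    for i, j in itertools.combinations(range(n), 2):
        rest = tuple(a for a in range(n) if a not in (i, j))
        joint = dist.sum(axis=rest)
        best = max(best, float(np.max(np.abs(joint - np.multiply.outer(marg[i], marg[j])))))
    return best

# Pairwise dependence for shrinking input bias: it decreases toward 0.
deps = [d2(joint_dist(e)) for e in (0.25, 0.125, 0.0)]
```

At e = 0 the input is uniform, the 4-bit output of the full-rank matrix G is uniform as well, and D_2 vanishes, matching the observation that D is close to 0 when the input bias is close to 0.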




6. Experimental Results and Applications

6.1. Experimental Results

We have conducted experiments to support the theoretical results and to examine the degree of dependence depending on the bias of the input bits. In order to verify our examination, we obtained experimental data sequences using NumPy [19], a Python library that provides various probability distributions such as the binomial, normal, and Poisson distributions. We collected the experimental data from Bernoulli distributions with fixed success probabilities by using NumPy, so that the data satisfy our assumptions that the input bits are biased, independent, and stationary. In this experiment, we set three success probabilities for the Bernoulli distribution, corresponding to three input-bias settings (the first being an input bias of 0.25), and then collected experimental data of sizes 30 MB, 300 MB, and 3 GB, respectively.

Table 4 shows the experimental results for D_2, D_3, and D_4 under the three input-bias settings with experimental data of size 30 MB, and Tables 5 and 6 show the results for the experimental data of sizes 300 MB and 3 GB, respectively. The reason for performing the experiments with different data sizes is to inspect our theoretical result accurately. Since the differences between the theoretical result and the experimental results in Tables 4, 5, and 6 are small, we can state that the degree of dependence in Tables 4, 5, and 6 is similar to the theoretical results in Table 3. In particular, as the data size increases, the accuracy of the experiment increases; thus the values in Table 6 are closer to the theoretical results. Therefore, the experimental results support our theoretical result for the degree of dependence.
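The experimental setup can be reproduced in miniature with NumPy. In the sketch below, the seed, the sample size, and the success probability p = 0.75 (input bias 0.25) are our illustrative choices, and G is again our assumed shifted-coefficient generator matrix; the paper's data sets are far larger:

```python
import numpy as np

G = np.array([[1, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]], dtype=np.uint8)

rng = np.random.default_rng(2019)
# Bernoulli(0.75) stream: i.i.d. bits with input bias |e| = 0.25.
bits = rng.binomial(1, 0.75, size=7 * 100_000).astype(np.uint8)
out = (bits.reshape(-1, 7) @ G.T) % 2    # one 4-bit output per 7-bit block

# Each output bit XORs three input bits, so its theoretical bias
# magnitude is 2^2 * e^3 = 0.0625 for e = 0.25.
emp_bias = np.abs(out.mean(axis=0) - 0.5)
```

The empirical per-bit output bias should concentrate around the theoretical value as the sample grows, mirroring the 30 MB / 300 MB / 3 GB progression in Tables 4-6.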










6.2. Application of the [7, 4, 3] BCH Code Corrector in TRNG

In order to harvest entropy from the noise source, RNGs in lightweight environments usually collect sensor-based noise sources such as the microphone, accelerometer, magnetometer, gyroscope, temperature, and humidity sensors [20, 21]. However, since these noise sources have low entropy [20, 21], RNGs have to apply a post-processing component to reduce the bias and increase the entropy per bit.

Various post-processing components can be considered appropriate for a lightweight environment, except for NIST conditioning components, in which heavy cryptographic algorithms are involved. Although the von Neumann corrector is lightweight, it is not suitable from a security point of view due to drawbacks such as the adversary bias; in contrast, linear code correctors perform much better than the von Neumann corrector [9]. The [7, 4, 3] BCH code corrector is applicable to resource-constrained environments such as IoT, embedded, and mobile devices [22, 23], since its code size and circuit complexity are small compared with other code correctors and NIST's conditioning components.

It is also possible to apply the [7, 4, 3] BCH code corrector iteratively when the size of the collected noise source is larger than the input size of the corrector. For instance, if the collected noise source is 32 bits, the [7, 4, 3] BCH code corrector is applied four times to separate 7-bit blocks, and the residual 4 input bits are either discarded or stored for the next generation. From our result relating the bias of the input bits to the degree of dependence among the output bits, the [7, 4, 3] BCH code corrector allows the degree of dependence to be controlled according to the bias of the input bits. The entropy sources can thus be effectively managed by the [7, 4, 3] BCH code corrector according to the environment in which the noise sources are collected.
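The iterative scheme above (32 collected bits giving four 7-bit blocks, with 4 leftover bits carried over rather than discarded) might look like the following sketch; `process_noise` is our illustrative name, and G is again the assumed generator matrix:

```python
import numpy as np

G = np.array([[1, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def process_noise(noise_bits, carry=()):
    """Apply the [7, 4, 3] corrector block by block. Residual bits that
    do not fill a 7-bit block are returned so the caller can store them
    for the next generation instead of discarding them."""
    buf = np.concatenate([np.asarray(carry, dtype=np.uint8),
                          np.asarray(noise_bits, dtype=np.uint8)])
    n_blocks = len(buf) // 7
    out = (buf[:n_blocks * 7].reshape(-1, 7) @ G.T) % 2
    return out.reshape(-1), buf[n_blocks * 7:]
```

With a 32-bit noise block, this yields 16 post-processed bits and a 4-bit carry; feeding the carry into the next call implements the "store for the next generation" option.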

7. Conclusion

We have proposed a new lightweight BCH code corrector in which the bitwise dependence of the output bits is controllable. We focused on the dependence of the output bits and defined a new measure, the degree of dependence, to build a lightweight BCH code corrector whose dependence is adjustable according to the bias of the input bits. Note that most code correctors have been studied on the subject of reducing the bias of the output bits when the input bits are independent and biased. We examined the [7, 4, 3] BCH code corrector from the viewpoint of independence in order to utilize it in a lightweight environment, and we obtained the relation between the degree of dependence and the bias of the input bits through theoretical results and simulations. Due to its simple structure, small code size, and low circuit complexity, the proposed code corrector can be applied to lightweight environments such as IoT, embedded, and mobile devices. Moreover, the proposed code corrector has measurable and controllable dependence among the output bits as well as reducing the bias of the input bits. We expect the proposed BCH code corrector to be utilized for efficiently managing the entropy source in lightweight environments. In future work, we plan to study the measurement of the degree of dependence of various probability distributions from other post-processing components, and we will also study how to apply the degree of dependence to the evaluation of cryptographically secure random bit sequences.

Data Availability

No data were used to support our study.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean Government (MSIT) (No. 2014-6-00908, Research on the Security of Random Number Generators and Embedded Devices).

References

  1. ISO/IEC 18031, Information Technology - Security Techniques - Random Bit Generation, 2011.
  2. W. Killmann and W. Schindler, "A proposal for: functionality classes and evaluation methodology for true (physical) random number generators," BSI AIS.31, 2001.
  3. E. Barker and J. Kelsey, Recommendation for Random Bit Generator (RBG) Constructions, NIST Special Publication 800-90C, 2016.
  4. M. S. Turan, E. Barker, J. Kelsey, K. A. McKay, M. L. Baish, and M. Boyle, "Recommendation for the entropy sources used for random bit generation," NIST SP 800-90B, 2018.
  5. A. Vassilev and T. A. Hall, "The importance of entropy to information security," The Computer Journal, vol. 47, no. 2, pp. 78-81, 2014.
  6. Y.-S. Kim, J.-W. Jang, and D.-W. Lim, "Linear corrector overcoming minimum distance limitation for secure TRNG from (17, 9, 5) quadratic residue code," ETRI Journal, vol. 32, no. 1, pp. 93-101, 2010.
  7. J. von Neumann, "Various techniques used in connection with random digits," Applied Mathematics Series, vol. 12, pp. 36-38, 1951.
  8. R. B. Davies, Exclusive OR (XOR) and Hardware Random Number Generators, 2002, http://www.robertnz.net/pdf/xor2.pdf.
  9. S. Kwok, Y. Ee, G. Chew, K. Zheng, K. Khoo, and C. Tan, "A comparison of post-processing techniques for biased random number generators," in Information Security Theory and Practice. Security and Privacy of Mobile Devices in Wireless Communication, vol. 6633 of Lecture Notes in Computer Science, pp. 175-190, Springer, Berlin, Germany, 2011.
  10. S. Markovski, D. Gligoroski, and L. Kocarev, "Unbiased random sequences from quasigroup string transformations," in Proceedings of the International Workshop on Fast Software Encryption, pp. 163-180, Springer, Berlin, Germany, 2005.
  11. M. Dichtl, "Bad and good ways of post-processing biased physical random numbers," in Proceedings of the International Workshop on Fast Software Encryption, pp. 45-62, Springer, Berlin, Germany, 2007.
  12. P. Lacharme, "Post-processing functions for a biased physical random number generator," in Proceedings of the International Workshop on Fast Software Encryption, pp. 334-342, Springer, Berlin, Germany, 2008.
  13. Federal Information Processing Standard 180-4, Secure Hash Standard (SHS), 2015.
  14. Federal Information Processing Standard 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, 2015.
  15. Federal Information Processing Standard 198-1, The Keyed-Hash Message Authentication Code (HMAC), 2008.
  16. R. V. Hogg and A. T. Craig, Introduction to Mathematical Statistics, Macmillan, 4th edition, 1978.
  17. R. Bose, Information Theory, Coding and Cryptography, McGraw-Hill Education, 3rd edition, 2016.
  18. J. Naor and M. Naor, "Small-bias probability spaces: efficient constructions and applications," SIAM Journal on Computing, vol. 22, no. 4, pp. 838-856, 1993.
  19. NumPy, http://www.numpy.org.
  20. C. Hennebert, H. Hossayni, and C. Lauradoux, "Entropy harvesting from physical sensors," in Proceedings of the 6th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2013), pp. 149-154, ACM, Hungary, April 2013.
  21. K. Wallace, K. Moran, E. Novak, G. Zhou, and K. Sun, "Toward sensor-based random number generation for mobile and IoT devices," IEEE Internet of Things Journal, vol. 3, no. 6, pp. 1189-1201, 2016.
  22. G. C. C. F. Pereira, R. C. A. Alves, F. L. da Silva, R. M. Azevedo, B. C. Albertini, and C. B. Margi, "Performance evaluation of cryptographic algorithms over IoT platforms and operating systems," Security and Communication Networks, vol. 2017, Article ID 2046735, 16 pages, 2017.
  23. M. Schramm, R. Dojen, and M. Heigl, "A vendor-neutral unified core for cryptographic operations in GF(p) and GF(2^m) based on Montgomery arithmetic," Security and Communication Networks, vol. 2018, Article ID 4983404, 2018.

Copyright © 2019 Hojoong Park et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

