Security and Communication Networks


Research Article | Open Access

Volume 2019 | Article ID 8214681

Édgar Salguero Dorokhin, Walter Fuertes, Edison Lascano, "On the Development of an Optimal Structure of Tree Parity Machine for the Establishment of a Cryptographic Key", Security and Communication Networks, vol. 2019, Article ID 8214681, 10 pages, 2019.

On the Development of an Optimal Structure of Tree Parity Machine for the Establishment of a Cryptographic Key

Academic Editor: Kuo-Hui Yeh
Received: 30 Dec 2018
Revised: 20 Feb 2019
Accepted: 26 Feb 2019
Published: 18 Mar 2019


When establishing a cryptographic key between two users, an asymmetric cryptography scheme is generally used to send it through an insecure channel. However, given that algorithms using this scheme, such as RSA, have already been compromised, it is imperative to research new methods of establishing a cryptographic key that remain secure over such a channel. To address this problem, a new branch known as neural cryptography was born, using a modified artificial neural network called a Tree Parity Machine (TPM). Its purpose is to establish a private key through an insecure channel. This article proposes the analysis of an optimal structure of a TPM network that allows generating and establishing a private cryptographic key of 512-bit length between two authorized parties. To achieve this, the combinations that make it possible to generate a key of that length were determined. In more than 15 million simulations, we measured synchronization times, the number of steps required, and the number of times an attacking TPM network managed to imitate the behaviour of the two networks. The simulations yielded the optimal combination, minimizing the synchronization time while prioritizing security against the attacking network. Finally, the model was validated by applying a heuristic rule.

1. Introduction

Cryptography is the mathematical science and discipline of writing messages in encoded text. Its purpose is to protect secrets from adversaries, interceptors, intruders, opponents, and attackers [1]. It also pays special attention to mechanisms that guarantee information integrity and deals with techniques for exchanging user authentication keys and protocols [2]. The best-known method is the combination of symmetric and asymmetric encryption and decryption algorithms: the asymmetric algorithm is used for the exchange of cryptographic keys, and the symmetric algorithm is used for information encryption and decryption.

The RSA public-key cryptosystem and systems based on elliptic curves are the most common forms of public-key cryptography in encryption and digital-signature standards [3]. However, the security of the RSA algorithm depends on the length of the prime numbers used for factoring [4]. Thus, one of the main concerns of RSA is the demand for large keys in today's cryptographic algorithms, since the product of two long prime numbers must be factored, and this product is far larger than either factor. Increasing the length of the primes increases security; however, it also increases the computational cost of factoring. For this reason, it is important to look for new methods of exchanging cryptographic keys securely and at a relatively low computational cost. Hence, neural cryptography was born; this method uses a neural network called a Tree Parity Machine (TPM). Through two TPM networks with exactly the same structure, two users can establish a cryptographic key by exchanging the inputs and outputs of these networks while keeping the synaptic weights secret.

The aim of our study is to search for an optimal structure that minimizes the time and number of steps that two TPM networks require to establish a cryptographic key of 512-bit length, while also producing behaviour that is difficult for an attacking network to imitate. For this, we created an algorithm in Python that performs the simulations needed to test and find the optimal structure. A total of 15,041,100 simulations were conducted over the feasible structures of the network. Additionally, we used R and GNU Octave for statistical analysis. Finally, a heuristic rule (proposed by [5, 6]) was applied to validate the computed values. As a result, it was possible to determine the optimal structure of the TPM network, with a low average number of steps, a very low success percentage for a passive attacking network, and a 0% success rate for a geometric attack.

The main contribution of this study is that two users will be able to generate a cryptographic key with a probability that depends on the number of steps established at the beginning of the synchronization, and with a very low probability of a successful passive attack.

The remainder of this document is structured as follows. Section 2 mentions the related works that support the present study. Section 3 describes the procedure that helps determine the best combination. Section 4 shows the results obtained from the experiment. Finally, Section 5 details conclusions and proposes future work.

2. Related Works

Concerning the studies related to this research, Kanter et al. [7] and Rosen-Zvi et al. [8] demonstrated that when two artificial neural networks are trained on each other's outputs according to a learning rule, they are able to develop equivalent states in their internal synaptic weights. Kinzel and Kanter [9] and Ruttor et al. [10] revealed that the possibility of a successful attack decreases as the synaptic depth of the network increases, and that the attacker's computational cost grows exponentially while the effort required by the users grows only polynomially. Similarly, Sarkar and Mandal [11] and Sarkar [12] mention that performance is improved by increasing the synaptic depth of the TPM networks, thereby counteracting brute-force attacks with current computing power. In comparison with the present study, we used the structure of the TPM network proposed by the latter. However, we determined that the security of the TPM network can also be increased by increasing the number of neurons in the hidden layer and the number of inputs of each neuron. In addition, the range of the synaptic-depth value has been modified to suit the need to generate a key of 512 bits.

With respect to the proposed algorithms for the design of the TPM network, Lei et al. [13] developed a two-layer, tree-connected feed-forward TPM network model. Fast synchronization can be achieved by increasing the minimum value of the Hamming distance between the internal representations and by reducing the probability of a step that does not modify the networks' weights. Allam et al. [14] proposed an algorithm that increases the security of neural cryptography by authenticating communications using previously shared secrets. As a result, they showed that the algorithm reaches a very high security level without increasing synchronization time. Ruttor [15] mentions that the value of K is 3, since lower values have negative consequences from a safety point of view and higher values have negative consequences in terms of synchronization time. Klimov et al. [16] calculate the probability that two networks do or do not synchronize their weights. Although this probability rises with a low value of K, the success of the attacking network also increases, and consequently the value of K must be greater. In comparison with our study, the initial structure of the TPM networks is random and different and has a single output. In addition, because the goal is to generate a 512-bit key, it was not arithmetically adequate to use odd numbers for the K, N, and L values.

In relation to measurements of TPM network performance, Dolecki and Kozera [17] present a frequency-analysis method that allows evaluating the synchronization level of two TPM networks before synchronization finishes, with the calculated value not depending on the difference of their synaptic weights. As a result, selecting an appropriate range for the count frequency and the threshold allows them to specify whether a synchronization will be short or long. Santhanalakshmi et al. [18] and Dolecki and Kozera [19] analyse and compare synchronization performance by employing, respectively, a genetic algorithm and a Gaussian distribution instead of uniform random values for the weights of the TPM networks. They found that replacing the random weights with optimal weights helps reduce the synchronization time. In addition, increasing the number of hidden and input neurons accelerates convergence and also reduces the probability of success of the "Majority Flipping" attack. Dolecki and Kozera [20] fit the synchronization-time distribution of two TPM networks to a Poisson distribution. Pu et al. [21] propose an algorithm that combines "true random sequences" (generated by artificial circuits and validated with randomness tests) with TPM networks, demonstrating more complex dynamic behaviours that offer better performance as an encryption tool and resistance to attacks. In comparison with our study, the synaptic-weight values of our TPM networks are generated according to a discrete uniform distribution. Additionally, the Poisson-distribution adjustment was made with the step-count results of each of the simulations.

With reference to rules that contribute to the design of TPM networks, Mu and Liao [5] and Mu et al. [6] define the following heuristic rule: "Keeping the equations of motion constant, a high value of the state classifier with respect to the minimum values of the smallest Hamming distances between the state vectors of the networks has a high probability of fulfilling the condition that the average change in the percentage difference between synaptic weights is greater than zero, improving the security of neural cryptography". For our study, we used this heuristic rule to determine the level of security of the final structure of the TPM network.

In regard to modifications of the initial structures of a TPM network, Gomez et al. [22] state that an initial misalignment in the weights significantly reduces both the synchronization time and the number of steps required. Within this context, Niemiec [23] presents a new idea for a key-reconciliation method in quantum cryptography using TPM networks, correcting errors that occur during transmission in the quantum channel. The number of steps necessary to establish the key is significantly reduced when the quantum bit error rate is low and the initial synchronization percentage of the two networks is high. In comparison with our study, we did not use initial structures with a pre-established alignment percentage. Such an initial alignment would have to be delivered and shared by the two users, which implies that an attacker would have more initial information about the structure of the TPM networks.

3. Materials and Methods

3.1. Background

A TPM is a neural network formed by a hidden layer and a single output. The general structure consists of the triad of values K, N, and L, where K is the number of neurons in the hidden layer, N is the number of inputs of each neuron in the hidden layer, and L sets the limit of the range of possible integer values of the synaptic weights.

To generate and establish a key of 512 bits, variations of the TPM structure whose synaptic weights allow this key length were tested. The value of L was taken as the base, since its binary notation determines the final length of the key. To obtain even values in the key length, the limits of the weight range were adjusted so that the maximum shifted weight value, in binary notation, has a length that divides 512 evenly. Thanks to this, there are ten possible values of L that allow generating keys of 512 bits; see Table 1.

Decimal value Binary length


For each value of L, there are values of K and N such that the product of K, N, and the binary length of each weight equals 512, considering the restriction K ≤ N, as shown in Table 2.

1 512
2 256
4 128
8 64
16 32

1 256
2 128
4 64
8 32
16 16

1 128
2 64
4 32
8 16

1 64
2 32
4 16
8 8

1 32
2 16
4 8

1 16
2 8
4 4

1 8
2 4

1 4
2 2

1 2

1 1

The restriction K ≤ N is due to the fact that, if K > N, the neural network can reach values that make synchronization impossible. In total, there are 30 possible combinations of K, N, and L values that allow us to generate a key of 512-bit length.
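Table 2 can be reproduced programmatically. The sketch below (hypothetical helper name; the bits-per-weight value stands in for the table's L column, since L determines the binary length of each weight) enumerates every valid triple:

```python
def combinations_512(total_bits=512):
    """Enumerate (K, N, bits_per_weight) triples such that
    K * N * bits_per_weight == total_bits, with K <= N and the
    bits-per-weight a power of two, mirroring the groups of Table 2."""
    out = []
    bits = 1
    while bits <= total_bits:
        kn = total_bits // bits      # number of weights needed at this width
        k = 1
        while k * k <= kn:
            if kn % k == 0:          # factor pair (k, kn // k) with k <= kn // k
                out.append((k, kn // k, bits))
            k += 1
        bits *= 2
    return out
```

For a 512-bit key this yields 30 triples, including (8, 16, 4): the (8, 16) structure encodes each of its 128 weights in 4 bits.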

To find the optimal combination that allows rapid synchronization in a minimum number of steps while preventing an attacker from copying the networks' behaviour, 500,000 simulations were performed for each combination. Each simulation consisted of three TPM networks: two networks (Alice and Bob) synchronized their weights in an authorized way, while an unauthorized attacking network (Eve) tried to imitate the behaviour of the other two to discover the key being established. The largest values of L were not considered in the simulations, owing to their length and the complexity of the calculation.

3.2. Implementation of the Algorithm and the Attack Model

We used Python to implement an algorithm that measures the time and the number of steps needed for synchronization, in addition to accumulating the number of steps across the simulations (see Algorithm 1).

Data: K, N, L, number of simulations S
Result: none
1  initialization;
2  for i ← 1 to S do
3    Alice ← TPM(K, N, L);
4    Bob ← TPM(K, N, L);
5    Eve ← TPM(K, N, L);
6    steps ← 0;
7    while Alice.weights != Bob.weights do
8      input ← randomVector();
9      outA ← Alice(input);
10     outB ← Bob(input);
11     outE ← Eve(input);
12     if outA == outB then
13       Alice.update(outB);
14       Bob.update(outA);
15     end
16     if outA == outB == outE then
17       Eve.update(outA);
18     end
19     steps ← steps + 1;
20   end
21   saveToFile(steps);
22 end

As can be seen in Algorithm 1, line 2 defines the total number of simulations that will be executed. Lines 3 to 5 initialize the three TPM networks with K, N, and L. Line 6 initializes the step counter to 0. Line 7 establishes that the simulation will not finish while the weights of both networks are not equal. In line 8, we create a random vector that will be the input for the networks. Lines 9 to 11 compute the outputs of the three networks. Line 12 analyses whether the outputs of the Alice and Bob networks are the same; in that case, lines 13 and 14 update the weights of both networks according to the previously computed outputs. Line 16 allows Eve to update her weights in line 17, but only when the outputs of the three networks are equal. Line 19 increases the step counter by 1. Finally, line 21 saves the number of steps to a file.
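A minimal Python sketch of this procedure follows (illustrative class and function names, not the authors' published code; the update uses the standard Hebbian TPM learning rule with weight clipping):

```python
import random

class TPM:
    """Minimal Tree Parity Machine: K hidden units, N inputs each,
    integer weights in [-L, L], output tau = product of hidden signs."""
    def __init__(self, K, N, L, rng):
        self.K, self.N, self.L = K, N, L
        self.W = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

    def output(self, x):
        # sigma_k is the sign of hidden unit k's local field (sign(0) := -1)
        self.sigma = [1 if sum(w * xi for w, xi in zip(row, xk)) > 0 else -1
                      for row, xk in zip(self.W, x)]
        self.tau = 1
        for s in self.sigma:
            self.tau *= s
        return self.tau

    def update(self, x):
        # Hebbian rule: adjust only hidden units agreeing with tau; clip to [-L, L]
        for k in range(self.K):
            if self.sigma[k] == self.tau:
                for j in range(self.N):
                    w = self.W[k][j] + x[k][j] * self.tau
                    self.W[k][j] = max(-self.L, min(self.L, w))

def synchronize(K, N, L, seed=0, max_steps=100_000):
    """One simulation of Algorithm 1: Alice and Bob update on matching
    outputs; Eve (passive attacker) updates only when all three match."""
    rng = random.Random(seed)
    alice, bob, eve = (TPM(K, N, L, rng) for _ in range(3))
    steps = 0
    while alice.W != bob.W and steps < max_steps:
        x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
        ta, tb, te = alice.output(x), bob.output(x), eve.output(x)
        if ta == tb:
            alice.update(x)
            bob.update(x)
            if te == ta:
                eve.update(x)
        steps += 1
    return alice, bob, eve, steps
```

With a small structure such as (3, 4, 3) the loop typically synchronizes within a few hundred steps; the structures of Table 3 plug in the same way.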

As the algorithm shows, passive attacks succeed only when the outputs of the three TPM networks match. Eve can only listen to the messages sent between Alice and Bob, and her learning process is slower, since she updates her weights only when her output matches the other two. According to [24], a geometric attack obtains better results when an attacker uses a single TPM network to mimic the behaviour of the two authorized networks. Therefore, an additional 180,000 simulations were conducted with a geometric attack. This attack applies when the outputs of the two authorized networks are equal and the output of the attacking network differs; in that case, the attacking network modifies its internal representation by flipping the sign of the partial output with the lowest absolute value.

For this purpose, we designed and included Algorithm 2. In this algorithm, line 3 calculates the absolute values of the partial outputs of each hidden neuron, using the input values and the weights of the attacking network. Line 4 calculates the sign of each partial output of each hidden neuron. In line 5, the sign corresponding to the position of the minimum absolute partial output is flipped. Line 6 calculates the output of the network with the new partial outputs. Finally, line 7 updates the network weights.

Data: outA, outB, outE, input, weightE
Result: none
1 initialization;
2 if outA == outB and outE != outA then
3   sigmaAbs ← abs(sum(input × weightE));
4   sigmaSign ← sign(sum(input × weightE));
5   sigmaSign[argmin(sigmaAbs)] ← −sigmaSign[argmin(sigmaAbs)];
6   tau ← prod(sigmaSign);
7   update(tau);
8 end
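Algorithm 2's correction step can be sketched in Python as follows (hypothetical function name; it operates on Eve's current input vectors and weight rows):

```python
def geometric_flip(x, W):
    """One geometric-attack correction (a sketch of Algorithm 2): given
    the attacker's inputs x and weights W (K rows of N integers each),
    flip the hidden sign whose local field is closest to zero, i.e. the
    unit most likely to disagree with the attacked networks."""
    fields = [sum(w * xi for w, xi in zip(row, xk)) for row, xk in zip(W, x)]
    sigma = [1 if h > 0 else -1 for h in fields]
    weakest = min(range(len(fields)), key=lambda k: abs(fields[k]))
    sigma[weakest] = -sigma[weakest]       # flip the least reliable sign
    tau = 1
    for s in sigma:
        tau *= s
    return sigma, tau
```

The returned internal representation and output would then be used to apply the normal learning rule, as in line 7 of Algorithm 2.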

4. Results and Discussion

4.1. Results of the Simulations

To perform the simulations and obtain their results, a DELL Inspiron 5759 computer with a 2.50 GHz sixth-generation Intel Core i7-6500U CPU (model 78) was used. It has one socket, two cores per socket, and two processing threads per core (four logical CPUs), with an L1d cache of 32K, L1i of 32K, L2 of 256K, and L3 of 4096K.

Table 3 shows the results obtained from the simulations performed for each combination of K, N, and L. The Min steps and Max steps columns show the minimum and maximum number of steps the two networks took to synchronize their weights. The Average steps column shows the average number of steps over all executed simulations. The Min time and Max time columns show the minimum and maximum time (in seconds) the two networks took to synchronize their weights, and the Average time column shows the average time (in seconds) over all executed simulations. The Eve's successful syncs column shows the number of occurrences in which the attacking network managed to imitate the behaviour of the other two networks out of the total executed simulations. Finally, the % Eve's successful syncs column shows the percentage that the previous column represents with respect to the total number of simulations.

Item | Combination (K, N, L) | Min steps | Max steps | Average steps | Min time (s) | Max time (s) | Average time (s) | Eve's successful syncs | % Eve's successful syncs

1 | (1, 512, ) | 5 | 28 | 9.3367 | 0.0657 | 2.2718 | 0.4259 | 368680 | 73.736
2 | (2, 256, ) | 5 | 25 | 9.3312 | 0.0596 | 1.5588 | 0.4253 | 368229 | 73.6458
3 | (4, 128, ) | 5 | 32 | 9.3374 | 0.0680 | 2.1275 | 0.4263 | 368471 | 73.6942
4 | (8, 64, ) | 4 | 27 | 9.3370 | 0.0656 | 1.9089 | 0.4233 | 368652 | 73.7304
5 | (16, 32, ) | 5 | 32 | 9.3356 | 0.0661 | 1.4625 | 0.4249 | 368389 | 73.6778
6 | (1, 256, ) | 8 | 45 | 15.5450 | 0.0573 | 1.4665 | 0.3824 | 355397 | 71.0794
7 | (2, 128, ) | 8 | 25371 | 15.7774 | 0.0523 | 537.5299 | 0.3857 | 349495 | 69.899
8 | (4, 64, ) | 8 | 4104 | 18.0226 | 0.0491 | 95.8512 | 0.3965 | 308691 | 61.7382
9 | (8, 32, ) | 9 | 814 | 31.6046 | 0.0403 | 19.6915 | 0.4597 | 161148 | 32.2296
10 | (16, 16, ) | 15 | 587 | 73.4867 | 0.0289 | 7.3628 | 0.5381 | 8102 | 1.6204
11 | (1, 128, ) | 11 | 78 | 25.3284 | 0.0472 | 1.1114 | 0.2983 | 132715 | 26.543
12 | (2, 64, ) | 15 | 289 | 65.2706 | 0.0411 | 2.7924 | 0.4699 | 41475 | 8.295
13 | (4, 32, ) | 26 | 616 | 128.1582 | 0.0309 | 4.5353 | 0.5325 | 487 | 0.0974
14 | (8, 16, ) | 38 | 769 | 211.6590 | 0.0262 | 6.0476 | 0.5741 | 0 | 0.0
15 | (1, 64, ) | 10 | 71 | 25.0318 | 0.0180 | 0.6389 | 0.1679 | 132666 | 26.5332
16 | (2, 32, ) | 15 | 372 | 74.9943 | 0.0362 | 1.9875 | 0.3599 | 39111 | 7.8222
17 | (4, 16, ) | 23 | 712 | 137.9569 | 0.0193 | 2.7179 | 0.4572 | 759 | 0.1518
18 | (8, 8, ) | 30 | 955 | 218.2211 | 0.0342 | 3.4490 | 0.5109 | 0 | 0.0
19 | (1, 32, ) | 7 | 68 | 21.3921 | 0.0105 | 0.2913 | 0.0847 | 144461 | 28.8922
20 | (2, 16, ) | 10 | 451 | 69.8188 | 0.0196 | 1.2301 | 0.2161 | 61628 | 12.3256
21 | (4, 8, ) | 14 | 777 | 132.0805 | 0.0181 | 2.7993 | 0.3200 | 4080 | 0.816
22 | (1, 16, ) | 6 | 84 | 24.9428 | 0.0020 | 0.2366 | 0.0549 | 46244 | 9.2488
23 | (2, 8, ) | 10 | 804 | 100.5999 | 0.0121 | 1.4193 | 0.1842 | 25006 | 5.0012
24 | (4, 4, ) | 9 | 837 | 134.2786 | 0.0032 | 2.0644 | 0.2281 | 9168 | 1.8336

As can be seen in Table 3, items 14 and 18, corresponding to the combinations (8, 16, ) and (8, 8, ), respectively, got the best results from the point of view of security against the attacking network (Eve): in none of the 500,000 simulations did Eve manage to imitate the behaviour of the other two networks within the same number of steps as Alice and Bob. However, these two combinations took a greater average number of steps to achieve synchronization (211.6590 and 218.2211, respectively) than the other combinations, which entails a longer average time.

For synchronization time, item 22, corresponding to combination (1, 16, ), produced the best result, with an average synchronization time of 0.0549 seconds. However, it yielded an attacking-network sync percentage of 9.2488%. Given that security was prioritized in the present study, we considered the two combinations mentioned above the better results. It should also be noted that the combinations in the first group of Table 3 (items 1 to 5), despite requiring the fewest synchronization steps, produced the worst results, because the attacking network Eve managed to imitate the behaviour of the other two networks in approximately 73% of the total simulations.

To determine the better of the two combinations above, specifically from the security point of view, we performed one million simulations for each and obtained the results shown in Table 4.

Item | Combination (K, N, L) | Min steps | Max steps | Average steps | Min time (s) | Max time (s) | Average time (s) | Eve's successful syncs | % Eve's successful syncs

1 | (8, 16, ) | 40 | 785 | 211.5888 | 0.0780 | 1.5155 | 0.4058 | 0 | 0
2 | (8, 8, ) | 27 | 861 | 218.3696 | 0.0312 | 0.9229 | 0.2385 | 2 | 0.0002

From Table 4, we can determine that the best combination for the triad (K, N, L) is (8, 16, ). Simulations were then carried out using the geometric attack, with the results shown in Table 5.

Combination (K, N, L) | Number of simulations | Eve's successful syncs | % Eve's successful syncs

(8, 16, ) | 180,000 | 0 | 0.0

More simulations with the combination (8, 16, ) were conducted to determine the evolution of the distribution of the number of steps. Across all simulations, we totalled the number of steps necessary for the two TPM networks to synchronize; see the histograms shown in Figure 1.

During these additional simulations, on a single occasion, Eve's attacking network managed to synchronize its weights with the other networks. Table 6 shows the probabilities of a successful attack for both combinations, relative to the total number of simulations performed.

Combination (K, N, L) | Number of simulations | Eve's successful syncs | % Eve's successful syncs

(8, 16, ) | 2,361,100 | 1 | 0.00004
(8, 8, ) | 1,500,000 | 2 | 0.0001

4.2. Adjustment of Probabilities Distribution and Calculation

As can be seen in Figure 1, the number of steps across all simulations follows a positively (right-) skewed discrete distribution. For the present work, the Poisson distribution was considered because of its mathematical parameters, lambda and delta, where delta is the probability of occurrence of the event, computed here as the maximum observed step count divided by the total number of simulations. The Poisson probability mass function is P(X = k) = λ^k e^(−λ) / k!.
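The probabilities reported below follow from the standard Poisson probability mass function, which can be evaluated with a small helper (an illustrative sketch, not the authors' script):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Standard Poisson probability mass function:
    P(X = k) = lam**k * exp(-lam) / k!."""
    return (lam ** k) * exp(-lam) / factorial(k)
```

For example, `poisson_pmf(0, 1.0)` evaluates e^(−1), and the probabilities over all k sum to 1 for any fixed lambda.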

Then, for the first test, we used the results obtained from the 1,000 simulations. The maximum value of the limit is 515, so δ = 515/1,000 = 0.515.

Taking the delta value δ = 0.515 (with lambda set equal to delta to obtain the highest probability), a probability of 0.4787 was obtained. Similarly, the probabilities were calculated for the rest of the simulations. For the 10,000 simulations, a delta value of 0.0634 (6.34%) was obtained, and its probability is 0.8147. In an analogous way, the same test was carried out with the results of the other simulations. Table 7 shows the obtained values.

Number of simulations | Max. limit | Delta | Probability | Function

1,000 | 515 | 0.5150 | 0.4787 |
10,000 | 634 | 0.0634 | 0.8147 |
100,000 | 709 | 0.00709 | 0.9625 |
250,000 | 750 | 0.0030 | 0.9814 |
500,000 | 718 | 0.0014 | 0.9902 |
1,000,000 | 785 | 0.0007 | 0.9946 |

As the number of simulations increases, the probability tends towards 1 (the maximum probability value), which implies that the tests performed are consistent. The same can be seen in Figure 2: the data tend to adjust mathematically to a Poisson distribution.

4.3. Generation of the Cryptographic Key of 512 Bits

When the networks finish their synchronization, their synaptic weights have values in the range [−L, L]. To obtain the 512-bit key, the value of L is added to each synaptic weight, shifting the range to non-negative integers. Each synaptic weight is then transformed to its binary equivalent of fixed length, and these values are concatenated to obtain the key of 512-bit length.
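This weight-to-key transformation can be sketched as follows (the value of L and the bit width per weight are left as parameters, since the exact values are not fixed above; the function name is illustrative):

```python
def weights_to_key(W, L, bits_per_weight):
    """Shift each synchronized weight from [-L, L] into a non-negative
    range by adding L, render it as a fixed-width binary string, and
    concatenate all strings into one key. Assumes the shifted maximum
    fits in bits_per_weight bits."""
    return ''.join(format(w + L, '0{}b'.format(bits_per_weight))
                   for row in W for w in row)

# Toy example: 4 weights, L = 1, 2 bits each -> an 8-bit "key"
key = weights_to_key([[-1, 0], [1, 1]], 1, 2)  # '00' '01' '10' '10'
```

With a K = 8, N = 16 structure and 4 bits per weight, the 128 weights concatenate to exactly 512 bits.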

4.4. Process of Validation

To validate the security of the proposed TPM network structure with its values of K, N, and L, and according to the heuristic rule proposed by Mu and Liao [5] and Mu et al. [6], the value of the state classifier was computed. For this, the Hamming distances were calculated between all possible internal-representation vectors whose outputs are equal (+1 and −1). Since K = 8, there are 2^8 = 256 possible vectors (128 with output +1 and 128 with output −1). The number of pairs in each group is given by the binomial coefficient.

Therefore, there are C(128, 2) = 8,128 pairs for each group. The value of the state classifier was computed as follows. First, the Hamming distances were calculated between each pair of vectors whose outputs are equal.

Then, for each group, the minimum value of these distances is calculated.

Finally, the value of the state classifier is calculated as the smallest of the two groups' minimum Hamming distances.

The computed value of the state classifier shows that the structure meets the condition that the average change in the percentage difference between synaptic weights is greater than zero. This implies that the synchronization time increases only polynomially as the value of L increases, which means that the proposed structure is secure.
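Our reading of this computation can be checked by brute force for small K (hypothetical function names): enumerate all sign vectors in {−1, +1}^K, group them by their parity output, and take the smallest pairwise Hamming distance:

```python
from itertools import product, combinations

def parity(v):
    """Product of the components of a +/-1 vector (the TPM output)."""
    p = 1
    for s in v:
        p *= s
    return p

def state_classifier(K):
    """Smallest Hamming distance between distinct vectors in {-1, +1}^K
    whose parity outputs are equal, over both output groups."""
    best = K + 1
    for target in (1, -1):
        group = [v for v in product((-1, 1), repeat=K) if parity(v) == target]
        for a, b in combinations(group, 2):
            best = min(best, sum(x != y for x, y in zip(a, b)))
    return best
```

Because two vectors of equal parity always differ in an even number of positions, the result is 2 for every K ≥ 2, including the K = 8 case used here.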

4.5. Process of Analysis of the Values of K, N, and L

The probability of success of an attacking network depends noticeably on the values of K, N, and L, as shown in Table 3. Therefore, an analysis of these values was conducted with respect to this probability. All possible combinations were plotted with their respective probabilities using the GNU Octave scatter3 function to draw a scatter diagram. Algorithm 3 presents the code that generated this graph.

Data: K, N, L, P
Result: Graph
1 initialization;
2 scatter3 (K(:), N(:), L(:), [16], P(:), '.');
3 set(gca, 'zscale', 'log');
4 xlabel "K";
5 ylabel "N";
6 zlabel "L";
7 colorbar;

Algorithm 3 starts with the values of K, N, and L and Eve's probabilities P for each combination. Line 2 draws a 3D scatter diagram with the values of K, N, and L; the value 16 indicates the size of the points to be plotted, and the values of P determine each point's colour. Line 3 changes the scale of the z axis (the values of L) to a logarithmic scale to improve visualization. Lines 4 to 6 set the axis labels. Finally, line 7 displays a colour bar. Figure 3 displays the result of Algorithm 3.

Figure 3 shows how the probabilities change according to the values of K, N, and L: the x axis shows the values of K, the y axis the values of N, and the z axis the values of L. The colour of each point displays the probability of success of the attacking network; a low probability is better (blue) and a high probability is worse (red).

As can be seen in Figure 3, the probability of a successful attack depends on all three values. A high value of N has a negative impact on the result, and a very low value of L also has a negative impact. For this reason, when proposing values of K, N, and L, a relatively low value of N is recommended, although it must remain higher than K, and the value of L should not be too low. These conditions ensure a very low probability of a successful passive attack.

5. Conclusions

This article has been developed in the context of designing a TPM neural network that allows generating a 512-bit key between two authorized parties. To do this, the optimal values of K, N, and L were computed through simulations that prioritized security against an attacking network. This attacking network, through a passive attack, tried to imitate the behaviour of the other two networks, achieving synchronization with them on only a single occasion. This demonstrates that a secret key can be exchanged between two authorized parties in a network with only a negligible probability that an attacking network can mimic the original networks' behaviour. In addition, a geometric attack was designed and executed, with a 0% success rate. Furthermore, if the values of K, N, and L of a TPM neural network are modified, cryptographic keys of various lengths can be obtained. These values also determine how easy or difficult it is for an attacking network to mimic the behaviour of the other two networks. According to our results, it is recommended that the values of K, N, and L meet the conditions identified in Section 4.5.

As future work, we plan to analyse the larger values of L that were not considered in this study due to software and hardware limitations, again prioritizing security against an attacking network. We also plan to study the combination of neural cryptography, using the values calculated in this study, with symmetric cryptographic systems that need a cryptographic key established between two parties (such as DNA-based cryptography), in order to perform secure communications without the need to send keys through insecure means. Finally, we recommend performing a more exhaustive security analysis of the values found for the structure of the TPM neural network, namely against specific attack models designed to compromise the cryptographic key, especially during its distribution.

Data Availability

The data used to support the findings of this study (CSV, TXT, and PNG) have been deposited in the Google Drive repository. In the following link, you can find the data used in our manuscript:

Conflicts of Interest

The authors declare that there are no conflicts of interest.


Acknowledgments

The funding of this scientific research is provided by the Ecuadorian Corporation for the Development of Research and the Academy (RED CEDIA) and by the Mobility Regulation of the Universidad de las Fuerzas Armadas ESPE, Sangolquí, Ecuador. The authors would like to thank RED CEDIA for its financial support in the development of this study, within the Project Grant GT-Cybersecurity.


References

1. F. L. Bauer, "Cryptology," in Encyclopedia of Cryptography and Security, pp. 283-284, Springer, 2011.
2. Y. Lindell and J. Katz, Introduction to Modern Cryptography, Chapman and Hall/CRC, 2014.
3. X. Zhou and X. Tang, "Research and implementation of RSA algorithm for encryption and decryption," in Proceedings of the 6th International Forum on Strategic Technology, IFOST 2011, pp. 1118-1121, IEEE, China, August 2011.
4. F. Meneses, W. Fuertes, J. Sancho et al., "RSA Encryption Algorithm Optimization to Improve Performance and Security Level of Network Messages," IJCSNS, vol. 16, no. 8, p. 55, 2016.
5. N. Mu and X. Liao, "An approach for designing neural cryptography," in International Symposium on Neural Networks, pp. 99-108, Springer, Berlin, Heidelberg, Germany, 2013.
6. N. Mu, X. Liao, and T. Huang, "Approach to design neural cryptography: a generalized architecture and a heuristic rule," Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 87, no. 6, Article ID 062804, 2013.
7. I. Kanter, W. Kinzel, and E. Kanter, "Secure exchange of information by synchronization of neural networks," EPL (Europhysics Letters), vol. 57, no. 1, pp. 141-147, 2002.
8. M. Rosen-Zvi, I. Kanter, and W. Kinzel, "Cryptography based on neural networks: analytical results," Journal of Physics A: Mathematical and General, vol. 35, no. 47, pp. L707-L713, 2002.
9. W. Kinzel and I. Kanter, "Interacting neural networks and cryptography," in Advances in Solid State Physics, pp. 383-391, Springer, 2002.
10. A. Ruttor, W. Kinzel, R. Naeh, and I. Kanter, "Genetic attack on neural cryptography," Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 73, no. 3, Article ID 036121, 2006.
11. A. Sarkar and J. K. Mandal, "Cryptanalysis of key exchange method in wireless communication," Compare, vol. 1, 2015.
12. A. Sarkar, "Genetic Key Guided Neural Deep Learning based Encryption for Online Wireless Communication (GKNDLE)," International Journal of Applied Engineering Research, vol. 13, no. 6, pp. 3631-3637, 2018.
13. X. Lei, X. Liao, F. Chen, and T. Huang, "Two-layer tree-connected feed-forward neural network model for neural cryptography," Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 87, no. 3, Article ID 032811, 2013.
14. A. M. Allam, H. M. Abbas, and M. W. El-Kharashi, "Authenticated key exchange protocol using neural cryptography with secret boundaries," in Proceedings of the 2013 International Joint Conference on Neural Networks, IJCNN 2013, pp. 1-8, USA, August 2013.
15. A. Ruttor, "Neural synchronization and cryptography," Disordered Systems and Neural Networks, 2007.
16. A. Klimov, A. Mityagin, and A. Shamir, "Analysis of neural cryptography," in Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, pp. 288-298, Springer, 2002.
17. M. Dolecki and R. Kozera, "Threshold method of detecting long-time TPM synchronization," in Computer Information Systems and Industrial Management, vol. 8104 of Lecture Notes in Computer Science, pp. 241-252, Springer, Berlin, Heidelberg, 2013.
18. S. Santhanalakshmi, K. Sangeeta, and G. K. Patra, "Analysis of neural synchronization using genetic approach for secure key generation," Communications in Computer and Information Science, vol. 536, pp. 207-216, 2015.
19. M. Dolecki and R. Kozera, "The Impact of the TPM Weights Distribution on Network Synchronization Time," in Computer Information Systems and Industrial Management, vol. 9339 of Lecture Notes in Computer Science, pp. 451-460, Springer International Publishing, Cham, Switzerland, 2015.
20. M. Dolecki and R. Kozera, "Distribution of the tree parity machine synchronization time," Advances in Science and Technology Research Journal, vol. 7, no. 18, pp. 20-27, 2013.
21. X. Pu, X.-J. Tian, J. Zhang, C.-Y. Liu, and J. Yin, "Chaotic multimedia stream cipher scheme based on true random sequence combined with tree parity machine," Multimedia Tools and Applications, vol. 76, no. 19, pp. 19881-19895, 2017.
22. H. Gomez, Ó. Reyes, and E. Roa, "A 65 nm CMOS key establishment core based on tree parity machines," Integration, the VLSI Journal, vol. 58, pp. 430-437, 2017.
23. M. Niemiec, "Error correction in quantum cryptography based on artificial neural networks," Cryptography and Security, 2018.
24. A. Ruttor, W. Kinzel, and I. Kanter, "Dynamics of neural cryptography," Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 75, no. 5, Article ID 056104, 2007.

Copyright © 2019 Édgar Salguero Dorokhin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
