Security and Communication Networks / 2020 / Research Article | Open Access

Volume 2020 | Article ID 5328059 | https://doi.org/10.1155/2020/5328059

Qinju Liu, Xianhui Lu, Fucai Luo, Shuai Zhou, Jingnan He, Kunpeng Wang

SecureBP from Homomorphic Encryption

Academic Editor: Stelvio Cimato
Received: 18 Nov 2019
Revised: 16 May 2020
Accepted: 26 May 2020
Published: 12 Jun 2020

Abstract

We present a secure backpropagation neural network training model (SecureBP), which allows a neural network to be trained while retaining the confidentiality of the training data, based on homomorphic encryption. We make two contributions. The first is a method for finding a more accurate and numerically stable polynomial approximation of functions on a given interval. The second is a strategy for refreshing ciphertexts during training, which keeps the growth of ciphertext noise linear.

1. Introduction

Driven by massive amounts of data and the scalability, versatility, and efficiency of cloud computing, modern machine learning (ML) has been widely used in many fields, including health care, military, and finance [1–3]. These fields often involve large amounts of sensitive data, so protecting data privacy while using such data has become a very important problem. At present, various approaches can be used to protect data privacy; differential privacy (DP), secure multiparty computation (MPC), and homomorphic encryption (HE) are the most widely used.

DP allows one to control the amount of information leaked from an individual record in a dataset. By using DP, one can ensure privacy for any entity whose information is contained in the dataset and create models that do not leak information about the data they were trained on. DP is therefore mainly applied during the training process. In this paper, however, we are concerned with cryptographic methods for protecting data privacy.

Most MPC methods establish a communication protocol among the parties involved such that, if the parties follow the protocol, they end up with the desired results while the security and privacy of their respective assets are protected [4–7]. However, due to the large scale of data used in machine learning, the communication cost of MPC is very high.

HE is another major method for protecting data privacy; it allows us to perform certain arithmetic operations on encrypted data without decryption. Fully homomorphic encryption (FHE), which allows arbitrarily complex and efficiently computable evaluations over encrypted data without decrypting them, was originally proposed by Rivest et al. in 1978 [8], but constructing it remained an open problem until Gentry presented the first plausible candidate FHE construction based on ideal lattices in 2009 [9]. Since then, a series of works [10–15] following Gentry's blueprint have improved the security assumptions and efficiency of FHE. Currently, several public libraries are available (Table 1): HElib [16] and SEAL [17] based on the BGV scheme [18]; FHEW [19] and TFHE [20, 21] based on the GSW scheme [14]; and HEAAN [22] based on the CKKS scheme [23]. The BGV-based schemes can handle many bits at the same time, so they can pack and batch many operations in the SIMD manner; however, the set of operations that are efficient with BGV depends on the selection of the parameter set. The GSW-based schemes can use Boolean circuits to deal with nonlinear operations quickly, but their arithmetic operations are relatively inefficient. The CKKS-based schemes can perform efficient approximate arithmetic operations on encrypted data by introducing a novel encoding technique and a fast rescale operation, but they cannot deal with nonpolynomial operations directly. CKKS is widely used in machine learning because of its efficient arithmetic, which is why we chose it for our SecureBP model.


Based scheme | Library     | Plaintext                    | Operation
BGV          | HElib, SEAL | Finite field packing         | Addition, multiplication
GSW          | FHEW, TFHE  | Binary string                | Look-up table
CKKS         | HEAAN       | Real/complex packing numbers | Fixed-point arithmetic

Two important use-cases for machine learning models are the predictions-as-a-service (PaaS) setting and the training-as-a-service (TaaS) setting. In the PaaS setting, a large organization (or the cloud) uses its proprietary data to train machine learning models and then monetizes them by deploying services that allow users to upload their inputs and receive predictions for a price. In the TaaS setting, the organization makes profits by deploying services that allow users to upload their encrypted inputs and receive an encrypted machine learning model. Moreover, in this setting, since training a model over encrypted data is time- and resource-consuming, the techniques and proprietary tools of the training algorithm are often considered critical intellectual property by their owner, who is typically not willing to share them.

BP [24] is one of the most classical and widely used neural network models. It is more powerful than linear and logistic regression models. Moreover, the BP network already contains the basic building block of the deep neural network (DNN); in other words, the BP network is the cornerstone of DNNs. Therefore, when studying data privacy protection for machine learning, the BP network model is an appropriate starting point.

1.1. Our Contributions

In this paper, we present a secure backpropagation neural network model (SecureBP) based on HE. In this model, in a setup phase, the data owner (user) encrypts his data and sends them to the cloud. In the computation phase, the cloud can train the model on the encrypted data without learning any information beyond the ciphertext of data. Technically, we have two main contributions: a more accurate polynomial approximation technique and a lightweight interactive scheme to refresh ciphertexts during training.

We focus on the TaaS setting in this paper and choose HE (specifically, the CKKS scheme) as the method to protect the user's data. For clarity, let us review the technical challenges of using HE for the BP network in the TaaS setting. First, in the BP network, each node is activated before output by an activation function, which is usually a nonpolynomial function such as sigmoid, hyperbolic tangent (tanh), or the rectified linear unit (ReLU). However, a limitation of most existing HE schemes is that they only support polynomial arithmetic operations, so the evaluation of the activation function is an obstacle for the homomorphic implementation of the BP network, since it cannot be expressed as a polynomial. In addition, to ensure security, HE introduces some noise during encryption, and the noise increases as the homomorphic computation proceeds; when the noise reaches a certain threshold, decryption errors occur. In view of these technical difficulties, we make the following two contributions.

The first contribution is that, by using Chebyshev polynomials (several studies have suggested this approach, but none have examined it in detail), we introduce a more accurate polynomial approximation of the sigmoid function on a given interval. Compared with Taylor polynomials, our method yields a closer match to the derivatives of the sigmoid function (see Section 3.1).

The second contribution is a lightweight interactive protocol, a novel strategy for refreshing ciphertexts during training. The trivial way to deal with the growing noise is bootstrapping, but bootstrapping comes with high computational overhead. To avoid it, we introduce a lightweight interaction between the cloud and the user during training. With this method, on the one hand, no technical information about the cloud's training model is revealed to the user; on the other hand, once the noise of the weight ciphertexts reaches a certain value, it grows only linearly.

Now that the basic ingredients are in place, we construct our SecureBP network. To demonstrate its feasibility, we estimate its performance on three datasets: the Iris, Diabetes, and Sonar datasets (see Table 2), which are from the University of California at Irvine (UCI) dataset repository [25].


Dataset  | Accuracy (%) | Time (ms)
Iris     | 79.6         | 7632.7
Diabetes | 65.1         | 9962.1
Sonar    | 82.23        | 2.0993 × 10^8

1.2. Related Work

Before the current work, there has been a body of research on privacy-preserving machine learning algorithms [26–29]. These papers propose solutions based on MPC and HE techniques (see Table 3), but each has its drawbacks.


Setting | Prior work      | Problem | Activation                  | Technique
PaaS    | DeepSecure [29] | DNN     | ReLU, sigmoid               | MPC
        | Gazelle [27]    | CNN     | ReLU                        | MPC, HE
        | CryptoNets [30] | CNN     | Square function             | SHE
        | FHE-DiNN [26]   | DiNN    | Sign                        | FHE
        | Chameleon [28]  | DNN     | ReLU, sigmoid               | MPC, HE
TaaS    | SecureML [33]   | LR, NN  | Sigmoid, Softmax            | MPC
        | [31]            | LR      | Least squares approximation | HE
        | Ours            | BP      |                             | HE

Privacy-preserving machine learning via MPC provides a promising solution by allowing different parties to train various models on their joint data without revealing any information beyond the outcome. These protocols require interactivity between the party that holds the data and the party that performs the blind classification. Even though the practical performance of MPC-based solutions has been impressive compared with FHE-based solutions, they incur other issues such as network latency and high bandwidth usage. Because of these downsides, HE-based solutions seem more scalable for real-life applications.

Privacy-preserving machine learning based on HE is more challenging. As mentioned above, the standard activation function is a challenge when applying HE to machine learning algorithms. Faced with this challenge, Gilad-Bachrach et al. [30] proposed a solution (CryptoNets) in which the standard activation function is replaced by a square function. The homomorphic computation depends on the total number of levels required to implement the network and results in a relatively high computational overhead, which limits the practicality of CryptoNets in resource-limited settings where the data owners have severe computational constraints. Moreover, an inherent limitation of most existing HE constructions is that they only support arithmetic operations over modular spaces. Therefore, such approaches require parameter sizes for real-number operations (i.e., without modular reduction over the plaintext space) that are too large to be practically implemented.

1.3. Organization

Section 2 briefly introduces some notation and reviews the framework of BP. Section 3 describes our SecureBP model. In Section 4, we evaluate our model and discuss the estimation and implementation results.

2. Preliminaries

2.1. Notations

All logarithms are base 2 unless otherwise indicated. During homomorphic operations, we use ⊗ to denote the multiplication between ciphertexts, ⊕ to denote the addition between ciphertexts, and ⊙ to denote the scalar multiplication between a constant and a ciphertext.

Next, we introduce the notation used in the BP network:
(i) x_i, the input value.
(ii) h_j, the output value of the hidden layer.
(iii) y_k, the output value of the output layer.
(iv) w_ij, the weight connecting the hidden-layer j-th node and the input-layer i-th node.
(v) v_jk, the weight connecting the output-layer k-th node and the hidden-layer j-th node.
(vi) b_j, the bias of the hidden-layer j-th node.
(vii) c_k, the bias of the output-layer k-th node.
(viii) L, the learning rate.

2.2. The Framework of BP

In this subsection, we give a brief review of one version of the BP network. For ease of presentation, we only consider a neural network of three layers (input layer, hidden layer, and output layer) in this paper; it is straightforward to extend our work to multilayer networks. This configuration is shown in Figure 1.

In the BP algorithm, there is one forward phase and one backward phase during each iteration. Then, the whole BP algorithm can be described in Algorithm 1.

(1) Set the number of iterations; initialize the weight matrices W and bias vectors b to small random values.
(2) Repeat
(3) Forward phase: beginning with the input nodes, compute weighted sums and activations for all nodes;
(4) Backward phase: compute gradients for all nodes starting from output nodes;
(5) Adjust the weights and biases
(6) until the number of iterations reaches a preset value.

The forward phase starts from the input layer and proceeds toward the output layer. During this phase, weighted sums and activations are computed for every node in each layer using the activation function, which is normally the sigmoid function σ(x) = 1/(1 + e^(−x)). That is,

h_j = σ(Σ_i w_ij · x_i + b_j),   y_k = σ(Σ_j v_jk · h_j + c_k).

The backward phase starts from the output layer and descends toward the bottom layer (i.e., the input layer) of the network to compute gradients. Finally, we update the weights (w_ij, v_jk) and biases (b_j, c_k) using the computed gradients. The rules for updating are as follows:

v_jk ← v_jk − L · δ_k · h_j,   c_k ← c_k − L · δ_k,
w_ij ← w_ij − L · δ_j · x_i,   b_j ← b_j − L · δ_j,

where δ_k = (y_k − t_k) · y_k · (1 − y_k) and δ_j = (Σ_k v_jk · δ_k) · h_j · (1 − h_j), with t_k the target output of the k-th output node.
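To make this concrete, here is a minimal NumPy sketch of one iteration of Algorithm 1 for the three-layer network, using the notation above; the dimensions, initialization range, and learning rate are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, W1, b1, W2, b2, lr):
    """One forward/backward iteration of the three-layer BP network."""
    # Forward phase: weighted sums and sigmoid activations.
    h = sigmoid(W1 @ x + b1)          # hidden-layer outputs h_j
    y = sigmoid(W2 @ h + b2)          # output-layer outputs y_k
    # Backward phase: gradients starting from the output nodes.
    delta_k = (y - t) * y * (1 - y)               # output-layer error
    delta_j = (W2.T @ delta_k) * h * (1 - h)      # hidden-layer error
    # Adjust the weights and biases.
    W2 -= lr * np.outer(delta_k, h)
    b2 -= lr * delta_k
    W1 -= lr * np.outer(delta_j, x)
    b1 -= lr * delta_j
    return W1, b1, W2, b2

# Toy dimensions (placeholders): 4 inputs, 4 hidden nodes, 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.uniform(-1, 1, (4, 4)), np.zeros(4)
W2, b2 = rng.uniform(-1, 1, (3, 4)), np.zeros(3)
x, t = rng.random(4), np.array([1.0, 0.0, 0.0])
W1, b1, W2, b2 = train_step(x, t, W1, b1, W2, b2, lr=0.6)
```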

3. SecureBP Based on CKKS Scheme

In this section, we explain how to securely train the BP network model using the CKKS scheme.

3.1. A Decent Polynomial Approximation

In the preceding update formulas, except for the activation function inside the neurons (i.e., the sigmoid function σ), all other operations in the BP network are additions and multiplications, so they can be implemented over encrypted data. One limitation of the existing HE schemes is that they only support polynomial arithmetic operations; the evaluation of the activation function is an obstacle for the implementation of the BP network, since it cannot be expressed as a polynomial. Hence, in order to run a complete BP neural network over encrypted data, we replace the sigmoid function with polynomial approximations that are compatible with practical HE schemes.

Actually, Taylor polynomials have been commonly used to approximate the sigmoid function [32–34]:

σ(x) ≈ 1/2 + x/4 − x^3/48 + x^5/480 − 17x^7/80640 + ⋯

However, we observe that the size of the error grows rapidly as |x| increases. Moreover, to guarantee the accuracy of the BP network, we would have to use a Taylor polynomial of higher degree, which requires too many homomorphic multiplications to be practically implemented. In summary, although Taylor expansions are convenient and easy to compute, their accuracy is not uniform because a Taylor expansion is a local approximation near a certain point. Therefore, we introduce another candidate with better approximation ability to replace the sigmoid function: an optimal and uniform polynomial approximation of σ(x). More precisely, we find a polynomial p(x) that minimizes the maximum absolute error between p(x) and σ(x) on a given interval.
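The growth of the Taylor error away from the expansion point can be checked numerically; a small sketch, assuming the degree-7 Maclaurin polynomial above and illustrative interval widths:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def taylor7(x):
    # Degree-7 Maclaurin expansion of the sigmoid function.
    return 0.5 + x / 4 - x**3 / 48 + x**5 / 480 - 17 * x**7 / 80640

for r in (1, 2, 4, 8):
    grid = np.linspace(-r, r, 4001)
    err = np.max(np.abs(sigmoid(grid) - taylor7(grid)))
    print(f"max error on [-{r}, {r}]: {err:.3g}")
```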

We use Chebyshev polynomials to construct the optimal uniform approximation polynomial. The Chebyshev polynomials can be simply defined, for n = 0, 1, 2, …, as

T_n(x) = cos(n · arccos x),   x ∈ [−1, 1].

From this definition, we can derive two important properties of Chebyshev polynomials. The first is the recurrence T_0(x) = 1, T_1(x) = x, and T_{n+1}(x) = 2x · T_n(x) − T_{n−1}(x). The second is that T_n has n distinct zeros on the interval [−1, 1], namely x_k = cos((2k − 1)π/(2n)) for k = 1, …, n. Both properties are easy to check numerically, as the sketch below shows.
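A short verification sketch (the degree n = 5 is an arbitrary choice):

```python
import numpy as np

def chebyshev(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x*T_n - T_{n-1}."""
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-1, 1, 1001)
# The recurrence agrees with the closed form cos(n * arccos(x)).
assert np.allclose(chebyshev(5, x), np.cos(5 * np.arccos(x)))
# The n zeros of T_n are x_k = cos((2k - 1) * pi / (2n)), k = 1..n.
k = np.arange(1, 6)
zeros = np.cos((2 * k - 1) * np.pi / 10)
assert np.allclose(chebyshev(5, zeros), 0, atol=1e-12)
```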

We can then state the following important theorem of polynomial approximation.

Theorem 1. Let f be an (n + 1)-times continuously differentiable function on the interval [−1, 1], and let p_n be the degree-n interpolation polynomial of f whose interpolation nodes are the zeros of the Chebyshev polynomial T_{n+1}. Then p_n is the optimal and uniform polynomial approximation of f on [−1, 1], and

max_{x ∈ [−1, 1]} |f(x) − p_n(x)| ≤ M_{n+1} / (2^n · (n + 1)!),   where M_{n+1} = max_{ξ ∈ [−1, 1]} |f^(n+1)(ξ)|.

It can be seen from the abovementioned theorem that, to find the optimal uniform approximation polynomial of f on the interval [−1, 1], we only need to set the interpolation nodes of p_n to the zeros of the Chebyshev polynomial T_{n+1}. For the sigmoid function σ(x) on an interval [a, b], we can apply the transformation x = ((b − a)t + (b + a))/2, so that t ∈ [−1, 1], and then apply Theorem 1 to σ. We note that, compared with Taylor polynomials, this method of polynomial approximation yields a closer match to the derivatives of the sigmoid function, which might help produce a better model (see Figure 2).
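As an illustration of this construction, the following sketch interpolates the sigmoid function at Chebyshev nodes mapped to an interval [a, b]; the interval [−8, 8] and the degree 7 are assumptions for demonstration, not the parameters used in the paper. Printing the maximum error lets one compare directly with the Taylor polynomial above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cheb_interp(f, a, b, deg):
    """Interpolate f at the zeros of T_{deg+1}, mapped from [-1, 1] to [a, b]."""
    k = np.arange(1, deg + 2)
    t = np.cos((2 * k - 1) * np.pi / (2 * (deg + 1)))   # Chebyshev nodes in [-1, 1]
    x = ((b - a) * t + (b + a)) / 2                     # transformed nodes in [a, b]
    return np.polynomial.Polynomial.fit(x, f(x), deg)   # interpolating polynomial

p = cheb_interp(sigmoid, -8.0, 8.0, deg=7)
grid = np.linspace(-8.0, 8.0, 10001)
print("max |sigmoid - p|:", np.max(np.abs(sigmoid(grid) - p(grid))))
```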
To justify our claims, we compare the accuracy of the produced BP neural network model using different activation functions with the Iris dataset (see Table 4).


Iteration | 50 (%) | 100 (%) | 150 (%) | 180 (%) | 200 (%) | 220 (%) | 240 (%) | 260 (%) | 280 (%) | 300 (%)
          | 33.33  | 33.33   | 40.67   | 74.67   | 83.33   | 84.00   | 84.00   | 84.67   | 82.67   | 85.33
          | 33.33  | 33.33   | 40.67   | 51.33   | 81.33   | 84.00   | 84.00   | 84.00   | 82.67   | 82.67
          | 33.33  | 33.33   | 45.33   | 54.67   | 58.00   | 88.00   | 90.00   | 90.00   | 91.30   | 91.30

3.2. Our SecureBP Network Model

In this section, we explain how to perform the lightweight interactive protocol that refreshes ciphertexts during the training phase. To be precise, we explicitly describe the full pipeline of the evaluation of SecureBP. We adopt the same assumptions as in the previous section, so that the whole database can be encrypted in m ciphertexts.

First of all, in the setup phase, the user encrypts the dataset and sends the ciphertexts to the public cloud. The cloud randomly initializes the weights and biases (in the initialization phase, the weights and biases can be plaintexts). Next, we introduce the iterative computing phase carried out in the cloud; the goal of each iteration is to update the weights and biases. In the following, ct(u) denotes the ciphertext of a plaintext value u (e.g., ct(x_i), ct(w_ij), ct(v_jk), ct(b_j), ct(c_k)). Each iteration consists of the following six steps:

Step 1. Cloud starts the iterative computation by homomorphically evaluating the hidden layer with the polynomial approximation p of the sigmoid function (here, ct(r_j) (including the one in (8)) represents the encryption of a small random number, which has no effect on the correctness of the decryption):

ct(h'_j) = p(⊕_i (ct(w_ij) ⊗ ct(x_i)) ⊕ ct(b_j)) ⊕ ct(r_j).

Step 2. Cloud sends the masked hidden-layer ciphertexts ct(h'_j) to the user. After decrypting and reencrypting them, the user sends the refreshed ciphertexts back to the cloud for further computation.

Step 3. Cloud computes the output layer homomorphically:

ct(y'_k) = p(⊕_j (ct(v_jk) ⊗ ct(h'_j)) ⊕ ct(c_k)) ⊕ ct(r_k).  (8)

Step 4. Cloud sends the masked output-layer ciphertexts ct(y'_k) to the user. After decrypting and reencrypting them, the user sends the refreshed ciphertexts back to the cloud for further computation.

Step 5. Cloud updates ct(v_jk) and ct(c_k) using the homomorphic counterparts of the update rules of Section 2.2:

ct(δ_k) = (ct(y'_k) ⊕ (−1) ⊙ ct(t_k)) ⊗ ct(y'_k) ⊗ (1 ⊕ (−1) ⊙ ct(y'_k)),
ct(v_jk) ← ct(v_jk) ⊕ (−L) ⊙ (ct(δ_k) ⊗ ct(h'_j)),
ct(c_k) ← ct(c_k) ⊕ (−L) ⊙ ct(δ_k).

Step 6. Cloud updates ct(w_ij) and ct(b_j):

ct(δ_j) = (⊕_k (ct(v_jk) ⊗ ct(δ_k))) ⊗ ct(h'_j) ⊗ (1 ⊕ (−1) ⊙ ct(h'_j)),
ct(w_ij) ← ct(w_ij) ⊕ (−L) ⊙ (ct(δ_j) ⊗ ct(x_i)),
ct(b_j) ← ct(b_j) ⊕ (−L) ⊙ ct(δ_j).

In the abovementioned iteration, we choose interaction between the cloud and the user to avoid high-cost bootstrapping. We send the outputs of the hidden layer and the output layer to the user; after the user refreshes these ciphertexts, they are sent back to the cloud to continue the subsequent homomorphic operations. Because the outputs of the hidden layer and the output layer are two ciphertext vectors, with a total of z + d ciphertexts (see Table 7), the communication cost between the cloud and the user is not high. Through the noise analysis in a later section, we find that the advantage of this interactive protocol is that it makes the noise of the ciphertexts grow only linearly once it reaches a certain value.
In this process, it should also be noted that what the cloud sends to the user is not the true outputs of the hidden layer and output layer, but the disturbed values ct(h'_j) and ct(y'_k). The idea is to prevent the user from snooping on the neural network trained in the cloud.
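To illustrate the data flow of the six steps (not the CKKS arithmetic itself), here is a sketch that simulates one iteration with a trivial stand-in cipher; refresh models the user's decrypt-and-reencrypt step, p is a placeholder low-degree approximation of the sigmoid, and all names, dimensions, and the mask magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trivial stand-in for CKKS: "ciphertexts" are just arrays, so only the
# protocol's data flow is shown, not real homomorphic arithmetic.
def encrypt(x):  return np.array(x, dtype=float)
def decrypt(c):  return c
def refresh(c):  return encrypt(decrypt(c))   # user-side decrypt + reencrypt

def p(x):
    # Placeholder polynomial approximation of the sigmoid function.
    return 0.5 + x / 4 - x**3 / 48

def iteration(ct_x, ct_t, ct_W1, ct_b1, ct_W2, ct_b2, lr=0.5, mask=1e-4):
    # Step 1: cloud evaluates the hidden layer and masks it with small noise.
    ct_h = p(ct_W1 @ ct_x + ct_b1) + mask * rng.standard_normal(ct_b1.shape)
    # Step 2: user refreshes the hidden-layer ciphertexts.
    ct_h = refresh(ct_h)
    # Step 3: cloud evaluates the (masked) output layer.
    ct_y = p(ct_W2 @ ct_h + ct_b2) + mask * rng.standard_normal(ct_b2.shape)
    # Step 4: user refreshes the output-layer ciphertexts.
    ct_y = refresh(ct_y)
    # Steps 5 and 6: cloud updates the weights and biases homomorphically.
    d_k = (ct_y - ct_t) * ct_y * (1 - ct_y)
    d_j = (ct_W2.T @ d_k) * ct_h * (1 - ct_h)
    ct_W2 -= lr * np.outer(d_k, ct_h); ct_b2 -= lr * d_k
    ct_W1 -= lr * np.outer(d_j, ct_x); ct_b1 -= lr * d_j
    return ct_W1, ct_b1, ct_W2, ct_b2

ct_W1, ct_b1 = encrypt(rng.uniform(-1, 1, (4, 4))), encrypt(np.zeros(4))
ct_W2, ct_b2 = encrypt(rng.uniform(-1, 1, (3, 4))), encrypt(np.zeros(3))
iteration(encrypt(rng.random(4)), encrypt([1.0, 0.0, 0.0]), ct_W1, ct_b1, ct_W2, ct_b2)
```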

4. Estimation

In this section, we show the parameter settings for BP and the CKKS scheme and analyze the estimation and implementation results.

4.1. Parameters Setting and Estimation Results
4.1.1. Parameters for the BP Algorithm

In the BP model, the numbers of input nodes and output nodes are determined by the task, while the number of hidden nodes is not. In fact, the number of hidden nodes has an impact on the performance of the neural network; an empirical formula for the number of hidden nodes is

h = √(d + m) + a,

where h is the number of hidden nodes, d is the number of input nodes, m is the number of output nodes, and a is an adjustment constant between 0 and 10.
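A quick sanity check of this empirical rule against the hidden-node counts in Table 5 (the output-node counts and adjustment constants below are our assumptions about the datasets, e.g., three classes for Iris):

```python
import math

def hidden_nodes(d, m, a):
    # Empirical rule h = sqrt(d + m) + a, with adjustment constant a in [0, 10].
    return round(math.sqrt(d + m) + a)

print(hidden_nodes(4, 3, 1.4))    # Iris: 4 inputs, 3 classes -> 4 hidden nodes
print(hidden_nodes(8, 1, 2.0))    # Diabetes: 8 inputs, 1 output -> 5 hidden nodes
print(hidden_nodes(60, 1, 1.2))   # Sonar: 60 inputs, 1 output -> 9 hidden nodes
```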

Weights are initialized as uniformly random values in a small range, and feature values in each dataset are normalized between 0 and 1. The architecture and training parameters used in our secure neural network model are shown in Table 5; we choose the Iris, Diabetes, and Sonar datasets from the University of California at Irvine (UCI) dataset repository [25]. The conventional BP learning network uses the same parameters as the SecureBP algorithm.


Dataset  | Number of samples | Number of features | Number of hidden nodes | Learning rate
Iris     | 150               | 4                  | 4                      | 0.6
Diabetes | 768               | 8                  | 5                      | 0.5
Sonar    | 208               | 60                 | 9                      | 0.4

4.1.2. Parameters for the CKKS Scheme

In the CKKS scheme, the coefficients of the error polynomials are sampled from a discrete Gaussian distribution of small standard deviation, and the secret key is chosen randomly from the set of signed binary polynomials with a fixed Hamming weight. We used the estimator of Albrecht et al. [35] to guarantee that the proposed parameter sets achieve at least an 80-bit security level against the known attacks on the LWE problem.

We analyze the growth of the noise in the relevant ciphertexts, and Table 6 provides theoretical upper bounds on the noise growth during the homomorphic operations, expressed in terms of the noise of a fresh ciphertext.


The ciphertext | The 1st iteration | The i-th iteration
As can be seen from Table 6, the maximum noise growth during the homomorphic operations is bounded; following [36], we choose the parameters L = 10 and N = 2^15.

4.2. Estimation and Implementation Results

By carefully analyzing our SecureBP protocol, we calculate the number of homomorphic operations required for each step in the course of an iteration (as shown in Table 7).


Step  | Enc   | Dec   | Mult | CMult | Add
1     | 0     | 0     | 0    |       |
2     | z     | z     | 0    | 0     | 0
3     | 0     | 0     | 0    |       |
4     | d     | d     | 0    | 0     | 0
5     | 0     | 0     | 3d   | 4d    | 5d
6     | 0     | 0     | 3z   |       |
Total | z + d | z + d |      |       |
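Combining these counts with per-operation timings gives a simple per-iteration cost model; the sketch below counts only the operations recoverable from Table 7 (Steps 2, 4, 5, and 6), so it is a lower bound, and the timings are parameters to be taken from [36]; the values in the call are illustrative placeholders, not measurements.

```python
def iteration_cost_ms(z, d, t_enc, t_dec, t_mult, t_cmult, t_add):
    """Lower-bound per-iteration cost from Table 7, counting Steps 2, 4, 5, and 6.

    z and d are the ciphertext counts refreshed in Steps 2 and 4; the t_*
    arguments are per-operation timings in ms."""
    user = (z + d) * (t_enc + t_dec)           # Steps 2 and 4: refresh traffic
    cloud = (3 * d + 3 * z) * t_mult \
          + 4 * d * t_cmult + 5 * d * t_add    # Steps 5 and 6
    return user + cloud

# Illustrative placeholder timings (not the measurements of [36]):
print(iteration_cost_ms(z=4, d=3, t_enc=5, t_dec=1, t_mult=20, t_cmult=2, t_add=0.1))
```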

From Table 7, we can see that the computation time required for an iteration is related only to the number of nodes in each layer. Combined with the time required for each homomorphic operation in [36], we give the estimated time (Table 8) for training the SecureBP network homomorphically on the Iris, Diabetes, and Sonar datasets, and Table 9 compares the error rates of the encrypted and unencrypted BP networks after 10 and 23 iterations, respectively.


Dataset  | Enc (ms) | Dec (ms) | Mult (ms) | CMult (ms) | Add (ms) | Total (ms)
Iris     | 522      | 39       | 6068      | 836        | 167.7    | 7632.7
Diabetes | 812      | 46.8     | 7872      | 1012       | 219.3    | 9962.1
Sonar    | 4060     | 78       | 209921716 | 5          | 80.5     | 2.0993 × 10^8


Dataset  | Number of iterations | Error rate of SecureBP (%) | Error rate of conventional BP (%)
Iris     | 10                   | 66.67                      | 66.67
         | 23                   | 20.40                      | 16.67
Diabetes | 10                   | 36.97                      | 34.71
         | 23                   | 34.90                      | 33.89
Sonar    | 10                   | 21.45                      | 18.26
         | 23                   | 17.77                      | 17.21

4.3. Efficiency and Accuracy Discussion

There are still some limitations in applying our evaluation model to an arbitrary dataset. On the one hand, the HE system is a promising solution for the privacy issue, but its efficiency in real applications remains an open question; in other words, one constraint of our approach is that the efficiency of the SecureBP network is limited by the efficiency of the homomorphic operations. On the other hand, we find that the accuracy of the network model is positively correlated with the degree of the approximating polynomial; however, a higher-degree polynomial means more homomorphic operations and hence more time to train the network model. Therefore, we need a tradeoff between the training efficiency and the accuracy of the model.

5. Conclusion and Future Work

In this paper, we present the SecureBP network model for homomorphic training. We introduce two methods, a more accurate polynomial approximation and a lightweight interactive protocol, to overcome the difficulties encountered when the CKKS scheme is used to protect the BP network, and our method shows good experimental performance on different datasets. For future work, we plan to explore how to train deep neural networks and convolutional neural networks effectively on encrypted data in a training-as-a-service setting.

Data Availability

All types of data used to support the findings of this study have been deposited in the University of California at Irvine (UCI) Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets.html).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant no. 61672030.

References

1. A. Esteva, B. Kuprel, R. A. Novoa et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017.
2. V. Gulshan, L. Peng, M. Coram et al., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA, vol. 316, no. 22, pp. 2402–2410, 2016.
3. F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: a unified embedding for face recognition and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823, Boston, MA, USA, June 2015.
4. M. Barni, C. Orlandi, and A. Piva, “A privacy-preserving protocol for neural-network based computation,” in Proceedings of the 8th Workshop on Multimedia and Security, pp. 146–151, ACM, Geneva, Switzerland, 2006.
5. T. Chen and S. Zhong, “Privacy-preserving backpropagation neural network learning,” IEEE Transactions on Neural Networks, vol. 20, no. 10, pp. 1554–1564, 2009.
6. C. Orlandi, A. Piva, and M. Barni, “Oblivious neural network computing via homomorphic encryption,” EURASIP Journal on Information Security, vol. 2007, no. 1, Article ID 037343, 2007.
7. A. Piva, C. Orlandi, M. Caini, T. Bianchi, and M. Barni, “Enhancing privacy in remote data classification,” in IFIP International Information Security Conference, pp. 33–46, Springer, Boston, MA, USA, 2008.
8. R. L. Rivest, L. Adleman, and M. L. Dertouzos, “On data banks and privacy homomorphisms,” Foundations of Secure Computation, Academia Press, Ghent, Belgium, 1978.
9. C. Gentry, “Fully homomorphic encryption using ideal lattices,” in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC '09), Bethesda, MD, USA, June 2009.
10. Z. Brakerski, C. Gentry, and V. Vaikuntanathan, “(Leveled) fully homomorphic encryption without bootstrapping,” ACM Transactions on Computation Theory, vol. 6, no. 3, pp. 1–36, 2014.
11. Z. Brakerski and V. Vaikuntanathan, “Efficient fully homomorphic encryption from (standard) LWE,” SIAM Journal on Computing, vol. 43, no. 2, pp. 831–871, 2014.
12. J. H. Cheon and D. Stehlé, “Fully homomorphic encryption over the integers revisited,” in Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 513–536, Springer, Sofia, Bulgaria, April 2015.
13. C. Gentry, S. Halevi, and N. P. Smart, “Homomorphic evaluation of the AES circuit,” in Advances in Cryptology—CRYPTO 2012, Lecture Notes in Computer Science, vol. 7417, pp. 850–867, Springer, Berlin, Germany, 2012.
14. C. Gentry, A. Sahai, and B. Waters, “Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based,” in Advances in Cryptology—CRYPTO 2013, vol. 8042, pp. 75–92, Springer, Berlin, Germany, 2013.
15. M. van Dijk, C. Gentry, S. Halevi, and V. Vaikuntanathan, “Fully homomorphic encryption over the integers,” in Advances in Cryptology—EUROCRYPT 2010, pp. 24–43, Springer, Monaco and Nice, France, May 2010.
16. S. Halevi and V. Shoup, “Algorithms in HElib,” in Proceedings of the Annual Cryptology Conference, pp. 554–571, Springer, Santa Barbara, CA, USA, August 2014.
17. H. Chen, K. Laine, and R. Player, “Simple encrypted arithmetic library—SEAL v2.1,” in Proceedings of the International Conference on Financial Cryptography and Data Security, vol. 10323, pp. 3–18, Springer, 2017.
18. Z. Brakerski, C. Gentry, and V. Vaikuntanathan, “(Leveled) fully homomorphic encryption without bootstrapping,” in Proceedings of Innovations in Theoretical Computer Science 2012, pp. 309–325, Cambridge, MA, USA, January 2012.
19. L. Ducas and D. Micciancio, “FHEW: bootstrapping homomorphic encryption in less than a second,” in Advances in Cryptology—EUROCRYPT 2015, Part I, Lecture Notes in Computer Science, vol. 9056, pp. 617–640, Springer, Berlin, Germany, 2015.
20. I. Chillotti, N. Gama, M. Georgieva, and M. Izabachène, “Faster fully homomorphic encryption: bootstrapping in less than 0.1 seconds,” in Advances in Cryptology—ASIACRYPT 2016, Part I, pp. 3–33, Springer, Berlin, Germany, 2016.
21. I. Chillotti, N. Gama, M. Georgieva, and M. Izabachène, “Faster packed homomorphic operations and efficient circuit bootstrapping for TFHE,” in Advances in Cryptology—ASIACRYPT 2017, Part I, vol. 10624, pp. 377–408, Springer, Cham, Switzerland, 2017.
22. J. H. Cheon, K. Han, A. Kim, M. Kim, and Y. Song, “Bootstrapping for approximate homomorphic encryption,” in Advances in Cryptology—EUROCRYPT 2018, Part I, pp. 360–384, Springer, Cham, Switzerland, 2018.
23. J. H. Cheon, A. Kim, M. Kim, and Y. S. Song, “Homomorphic encryption for arithmetic of approximate numbers,” in Advances in Cryptology—ASIACRYPT 2017, Part I, vol. 10624, pp. 409–437, Springer, Cham, Switzerland, 2017.
24. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” Technical Report, University of California, San Diego, Institute for Cognitive Science, San Diego, CA, USA, 1985.
25. C. L. Blake, UCI Repository of Machine Learning Databases, University of California, Irvine, CA, USA, 1998, http://www.ics.uci.edu/mlearn/MLRepository.html.
26. F. Bourse, M. Minelli, M. Minihold, and P. Paillier, “Fast homomorphic evaluation of deep discretized neural networks,” in Advances in Cryptology—CRYPTO 2018, Part III, Lecture Notes in Computer Science, vol. 10993, pp. 483–512, Springer, Cham, Switzerland, 2018.
27. C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, “GAZELLE: a low latency framework for secure neural network inference,” in Proceedings of USENIX Security 2018, pp. 1651–1669, Baltimore, MD, USA, 2018.
28. M. S. Riazi, C. Weinert, O. Tkachenko, E. M. Songhori, T. Schneider, and F. Koushanfar, “Chameleon: a hybrid secure computation framework for machine learning applications,” in Proceedings of the Asia Conference on Computer and Communications Security, pp. 707–721, Incheon, Korea, June 2018.
29. B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “DeepSecure: scalable provably-secure deep learning,” in Proceedings of the 55th ACM/ESDA/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, June 2018.
30. R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing, “CryptoNets: applying neural networks to encrypted data with high throughput and accuracy,” in Proceedings of the International Conference on Machine Learning, pp. 201–210, New York, NY, USA, June 2016.
31. A. Kim, Y. Song, M. Kim, K. Lee, and J. H. Cheon, “Logistic regression model training based on the approximate homomorphic encryption,” BMC Medical Genomics, vol. 11, no. S4, 2018.
32. J. W. Bos, K. Lauter, and M. Naehrig, “Private predictive analysis on encrypted medical data,” Journal of Biomedical Informatics, vol. 50, pp. 234–243, 2014.
33. P. Mohassel and Y. Zhang, “SecureML: a system for scalable privacy-preserving machine learning,” in Proceedings of the IEEE Symposium on Security and Privacy, pp. 19–38, San Jose, CA, USA, May 2017.
34. J. Yuan and S. Yu, “Privacy preserving back-propagation neural network learning made practical with cloud computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 212–221, 2014.
35. M. R. Albrecht, R. Player, and S. Scott, “On the concrete hardness of learning with errors,” Journal of Mathematical Cryptology, vol. 9, no. 3, pp. 169–203, 2015.
36. J. H. Cheon, K. Han, A. Kim, M. Kim, and Y. Song, “A full RNS variant of approximate homomorphic encryption,” in Selected Areas in Cryptography—SAC 2018, vol. 11349, pp. 347–368, Springer, Cham, Switzerland, 2018.

Copyright © 2020 Qinju Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

