Research Article  Open Access
Qinju Liu, Xianhui Lu, Fucai Luo, Shuai Zhou, Jingnan He, Kunpeng Wang, "SecureBP from Homomorphic Encryption", Security and Communication Networks, vol. 2020, Article ID 5328059, 9 pages, 2020. https://doi.org/10.1155/2020/5328059
SecureBP from Homomorphic Encryption
Abstract
We present a secure backpropagation neural network training model (SecureBP), based on homomorphic encryption, which allows a neural network to be trained while preserving the confidentiality of the training data. We make two contributions. The first is a method for finding a more accurate and numerically stable polynomial approximation of functions on a given interval. The second is a strategy for refreshing ciphertexts during training, which keeps the growth of the ciphertext noise linear.
1. Introduction
Driven by massive amounts of data and the high scalability, versatility, and efficiency of cloud computing, modern machine learning (ML) has been widely used in many fields, including health care, the military, and finance [1–3]. These fields often involve large amounts of sensitive data, so protecting data privacy while using such data is an important problem. At present, various approaches can be used to protect data privacy; differential privacy (DP), secure multiparty computation (MPC), and homomorphic encryption (HE) are the most widely used.
DP allows one to control the amount of information leaked from an individual record in a dataset. By using DP, one can ensure privacy for any entity whose information is contained in the dataset and create models that do not leak information about the data they were trained on. Therefore, DP is mainly used in the training process. However, we are more concerned with how to use cryptographic methods to protect data privacy.
Most MPC methods establish a communication protocol among the parties involved such that, if the parties follow the protocol, they obtain the desired results while protecting the security and privacy of their respective assets [4–7]. However, due to the large scale of data used in machine learning, the communication cost of MPC is very high.
HE is another major method for protecting data privacy; it allows us to perform certain arithmetic operations on encrypted data without decryption. Fully homomorphic encryption (FHE), which allows arbitrarily complex but efficiently computable evaluations over encrypted data without decrypting them, was originally proposed by Rivest et al. in 1978 [8], but its construction remained an open problem until Gentry presented the first plausible candidate FHE scheme based on ideal lattices in 2009 [9]. Since then, a series of works [10–15] following Gentry's blueprint have improved the security assumptions and efficiency of FHE. Currently, several public libraries are available (Table 1): HElib [16] and SEAL [17] based on the BGV scheme [18], FHEW [19] and TFHE [20, 21] based on the GSW scheme [14], and HEAAN [22] based on the CKKS scheme [23]. BGV-based schemes can handle many bits at the same time, so they can pack data and batch many operations in the SIMD manner; however, the set of operations that are efficient with BGV depends on the selection of the parameter set. GSW-based schemes can use Boolean circuits to handle nonlinear operations quickly, but their arithmetic operations are relatively inefficient. CKKS-based schemes can perform efficient approximate arithmetic on encrypted data by introducing a novel encoding technique and a fast rescale operation, but they cannot handle nonpolynomial operations directly. CKKS is widely used in machine learning due to its efficient arithmetic, which is why we chose it for our SecureBP model.

Two important use cases for machine learning models are the prediction-as-a-service (PaaS) setting and the training-as-a-service (TaaS) setting. In the PaaS setting, a large organization (or the cloud) uses its proprietary data to train machine learning models and then monetizes them by deploying services that allow users to upload their inputs and receive predictions for a price. In the TaaS setting, the organization profits by deploying services that allow users to upload their encrypted inputs and receive an encrypted machine learning model. Moreover, in this setting, since training an encrypted model is time- and resource-consuming, the techniques and proprietary tools of the training algorithm are often considered critical intellectual property by their owner, who is typically not willing to share them.
BP [24] is one of the most classical and widely used neural network models. It is more powerful than linear regression and logistic regression models. Moreover, the BP network already contains the basic modules of a deep neural network (DNN); in other words, the BP network is the cornerstone of DNNs. Therefore, when studying data privacy protection in machine learning, it is appropriate to take the BP network model as the starting point.
1.1. Our Contributions
In this paper, we present a secure backpropagation neural network model (SecureBP) based on HE. In this model, in a setup phase, the data owner (user) encrypts his data and sends them to the cloud. In the computation phase, the cloud can train the model on the encrypted data without learning any information beyond the ciphertext of data. Technically, we have two main contributions: a more accurate polynomial approximation technique and a lightweight interactive scheme to refresh ciphertexts during training.
We focus on the TaaS setting in this paper and choose HE (specifically, the CKKS scheme) as the method to protect the user's data. For clarity, let us review the technical challenges and difficulties of using HE for the BP network in the TaaS setting. First, in the BP network, each node applies an activation function before output, and the activation is usually a nonpolynomial function such as the sigmoid, hyperbolic tangent (tanh), or rectified linear unit (ReLU). However, most existing HE schemes support only polynomial arithmetic operations, so evaluating the activation function is an obstacle for a homomorphic implementation of the BP network, since it cannot be expressed as a polynomial. In addition, to ensure security, HE introduces some noise during encryption, and the noise increases as homomorphic computation proceeds; when the noise reaches a certain threshold, decryption errors occur. In view of these technical difficulties, we make the following two contributions.
The first contribution is a more accurate polynomial approximation of the sigmoid function on a given interval, obtained using Chebyshev polynomials (several studies have suggested this approach, but none have examined it in detail). Compared with Taylor polynomials, our method yields an approximation whose derivatives agree more closely with those of the sigmoid function (see Section 3.1).
The second contribution is a lightweight interactive protocol, a novel strategy for refreshing ciphertexts during training. The standard way to deal with growing noise is bootstrapping, but bootstrapping comes with high computational overhead. To avoid costly bootstrapping, we introduce a lightweight interaction between the cloud and the user during training. With this method, on the one hand, no technical information about the cloud's training model is revealed to the user; on the other hand, the noise of the weight ciphertext grows only linearly once it reaches a certain value.
Now that the basic ingredients are in place, we construct our SecureBP network. To demonstrate its feasibility, we estimate its performance on three datasets from the University of California at Irvine (UCI) machine learning repository [25]: the Iris, Diabetes, and Sonar datasets (see Table 2).

1.2. Related Work
Prior to this work, there has been research on privacy-preserving machine learning algorithms [26–29]. These papers propose solutions based on MPC and HE techniques (see Table 3), but each has drawbacks.
Privacy-preserving machine learning via MPC offers a promising solution by allowing different parties to train various models on their joint data without revealing any information beyond the outcome. These protocols require interaction between the party that holds the data and the party that performs the blind computation. Even though the practical performance of MPC-based solutions has been impressive compared with FHE-based solutions, they incur other issues such as network latency and high bandwidth usage. Because of these downsides, HE-based solutions seem more scalable for real-life applications.
Privacy-preserving machine learning based on HE is more challenging. As mentioned above, the standard activation function is an obstacle to applying HE to machine learning algorithms. Facing this challenge, Gilad-Bachrach et al. [30] propose a solution (CryptoNets) that replaces the standard activation function with a square function. The cost of the homomorphic computation depends on the total number of levels required to implement the network, and the resulting overhead limits the practicality of CryptoNets in resource-limited settings where data owners have severe computational constraints. Moreover, an inherent limitation of most existing HE constructions is that they support arithmetic only over modular spaces. Therefore, such approaches require a parameter size for real-number operations (i.e., with no modular reduction over the plaintext space) that is too large to be practical.
1.3. Organization
Section 2 briefly introduces notation and reviews the framework of BP. Section 3 describes our SecureBP model. Section 4 evaluates our model and discusses the estimation and implementation results.
2. Preliminaries
2.1. Notations
All logarithms are base 2 unless otherwise indicated. For homomorphic operations, we use ⊗ to denote multiplication between ciphertexts, ⊕ to denote addition between ciphertexts, and ⊙ to denote scalar multiplication between a constant and a ciphertext.
Next, we introduce the notation used in the BP network:
(i) x_i, the input value of the ith input node.
(ii) h_j, the output value of the jth hidden-layer node.
(iii) o_k, the output value of the kth output-layer node.
(iv) w_{ji}, the weight connecting the hidden-layer jth node and the input-layer ith node.
(v) v_{kj}, the weight connecting the output-layer kth node and the hidden-layer jth node.
(vi) b_j, the bias of the hidden-layer jth node.
(vii) c_k, the bias of the output-layer kth node.
(viii) L, the learning rate.
2.2. The Framework of BP
In this subsection, we give a brief review of one version of the BP network. For ease of presentation, in this paper, we only consider a neural network of three layers (input layer, hidden layer, output layer); it is trivial to extend our work to a multilayer network. This configuration can be seen in Figure 1.
In the BP algorithm, there is one forward phase and one backward phase during each iteration. Then, the whole BP algorithm can be described in Algorithm 1.

The forward phase starts from the input layer and proceeds toward the output layer. During this phase, a weighted sum and an activation are computed for every node in each layer using the activation function, which is normally the sigmoid function σ. That is,

h_j = σ(Σ_i w_{ji} x_i + b_j),  o_k = σ(Σ_j v_{kj} h_j + c_k).
The backward phase starts from the output layer and descends toward the bottom (input) layer of the network to compute gradients. Finally, we update the weights (w_{ji}, v_{kj}) and biases (b_j, c_k) using the computed gradients. The update rules are

v_{kj} ← v_{kj} + L·δ_k·h_j,  c_k ← c_k + L·δ_k,
w_{ji} ← w_{ji} + L·δ_j·x_i,  b_j ← b_j + L·δ_j,

where δ_k = o_k(1 − o_k)(t_k − o_k) and δ_j = h_j(1 − h_j)·Σ_k v_{kj}·δ_k, with t_k the target value of the kth output node.
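To make the two phases concrete, here is a minimal plain-NumPy sketch of one training iteration of the three-layer network, using the sigmoid activation and the update rules above (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_iteration(x, t, w, b, v, c, lr=0.1):
    """One forward/backward pass of a three-layer BP network.

    x: input vector (d,); t: target vector (m,);
    w: hidden weights (h, d); b: hidden biases (h,);
    v: output weights (m, h); c: output biases (m,).
    Returns the updated parameters.
    """
    # Forward phase: weighted sums followed by sigmoid activations.
    h_out = sigmoid(w @ x + b)        # hidden-layer outputs h_j
    o_out = sigmoid(v @ h_out + c)    # output-layer outputs o_k

    # Backward phase: error terms, using sigmoid'(z) = s(1 - s).
    delta_o = o_out * (1 - o_out) * (t - o_out)      # output-layer deltas
    delta_h = h_out * (1 - h_out) * (v.T @ delta_o)  # hidden-layer deltas

    # Updates with learning rate lr (the paper's L).
    v = v + lr * np.outer(delta_o, h_out)
    c = c + lr * delta_o
    w = w + lr * np.outer(delta_h, x)
    b = b + lr * delta_h
    return w, b, v, c
```

Running a few iterations on a single sample drives the squared error down, which is a quick way to check the signs in the update rules.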
3. SecureBP Based on CKKS Scheme
In this section, we explain how to securely train the BP network model using the CKKS scheme.
3.1. A Decent Polynomial Approximation
In the preceding update formulas, except for the activation function inside the neurons (i.e., the sigmoid function σ(x) = 1/(1 + e^{−x})), all operations in the BP network are additions and multiplications, so they can be implemented over encrypted data. One limitation of existing HE schemes is that they support only polynomial arithmetic. The evaluation of the activation function is an obstacle for the implementation of the BP network since it cannot be expressed as a polynomial. Hence, to run a complete BP neural network over encrypted data, we replace the sigmoid function with polynomial approximations that are compatible with practical HE schemes.
In fact, Taylor polynomials have commonly been used to approximate the sigmoid function [32–34]:

σ(x) ≈ 1/2 + x/4 − x³/48 + x⁵/480 − 17x⁷/80640 + ⋯
However, we observe that the approximation error grows rapidly as |x| increases. Moreover, to guarantee the accuracy of the BP network, we would have to use a higher-degree Taylor polynomial, which requires too many homomorphic multiplications to be practical. In summary, although Taylor expansions are convenient and easy to compute, their accuracy is not uniform because they are local approximations near a single point. Therefore, we introduce a better candidate to replace the sigmoid function: an optimal uniform polynomial approximation of σ(x). More precisely, we find a polynomial p(x) that minimizes the maximum absolute error between p(x) and σ(x) over a given interval.
We use Chebyshev polynomials to construct the optimal uniform approximation polynomial. The Chebyshev polynomials can be defined simply as

T_n(x) = cos(n·arccos x),  x ∈ [−1, 1],  n = 0, 1, 2, …
From this definition, we obtain two important properties of the Chebyshev polynomials. The first is the recurrence T_{n+1}(x) = 2x·T_n(x) − T_{n−1}(x), with T_0(x) = 1 and T_1(x) = x. The second is that T_n has n distinct zeros on the interval (−1, 1), namely, x_k = cos((2k − 1)π/(2n)) for k = 1, …, n. We can then state an important theorem of polynomial approximation as follows.
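Both properties are easy to verify numerically. The following standalone sketch (not part of the SecureBP pipeline) checks the three-term recurrence and the zero points of T_n for small n:

```python
import numpy as np

def T(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos x) on [-1, 1]."""
    return np.cos(n * np.arccos(x))

x = np.linspace(-0.99, 0.99, 201)

# Property 1: the three-term recurrence T_{n+1} = 2x T_n - T_{n-1}.
for n in range(1, 8):
    assert np.allclose(T(n + 1, x), 2 * x * T(n, x) - T(n - 1, x), atol=1e-9)

# Property 2: T_n has n zeros x_k = cos((2k - 1) pi / (2n)), k = 1..n.
n = 7
zeros = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
assert np.allclose(T(n, zeros), 0.0, atol=1e-9)
```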
Theorem 1. Let f be a continuously differentiable function on the interval [−1, 1], and let L_n be the degree-n interpolation polynomial whose interpolation nodes are the zeros of the Chebyshev polynomial T_{n+1}. Then L_n is the optimal uniform polynomial approximation of f on [−1, 1], and

max_{x∈[−1,1]} |f(x) − L_n(x)| ≤ (1/(2^n (n + 1)!)) · max_{x∈[−1,1]} |f^{(n+1)}(x)|.
It follows from the theorem that to find the optimal uniform approximation polynomial of f on [−1, 1], we only need to take the interpolation nodes of L_n to be the zeros of the Chebyshev polynomial T_{n+1}. For a function on a general interval [a, b], we apply the change of variable x = ((b − a)t + (b + a))/2, which maps t ∈ [−1, 1] to x ∈ [a, b]; Theorem 1 can then be applied to the transformed function. We note that, compared with Taylor polynomials, this polynomial approximation has derivatives that agree more closely with those of the sigmoid function, which might help produce a better model (see Figure 2).
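As a sanity check of this approach, the following sketch interpolates the sigmoid at Chebyshev nodes on an illustrative interval [−4, 4] and compares the maximum error against the degree-7 Taylor expansion (the interval, degree, and helper names are our own choices for illustration, not the paper's parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cheb_interpolant(f, deg, a, b):
    """Degree-`deg` interpolant of f on [a, b] at Chebyshev nodes.

    The nodes are the zeros of T_{deg+1} mapped from [-1, 1] to [a, b],
    as prescribed by Theorem 1.
    """
    k = np.arange(1, deg + 2)
    t = np.cos((2 * k - 1) * np.pi / (2 * (deg + 1)))  # zeros of T_{deg+1}
    x = 0.5 * (b - a) * t + 0.5 * (a + b)              # map to [a, b]
    return np.polynomial.Polynomial.fit(x, f(x), deg)  # exact interpolation

a, b, deg = -4.0, 4.0, 7
p = cheb_interpolant(sigmoid, deg, a, b)

# Degree-7 Taylor (Maclaurin) expansion of the sigmoid for comparison.
taylor = lambda x: 0.5 + x / 4 - x**3 / 48 + x**5 / 480 - 17 * x**7 / 80640

grid = np.linspace(a, b, 1000)
cheb_err = np.max(np.abs(sigmoid(grid) - p(grid)))
taylor_err = np.max(np.abs(sigmoid(grid) - taylor(grid)))
```

On this interval the Chebyshev interpolant keeps the error small and nearly uniform, while the Taylor polynomial is accurate near 0 but diverges badly toward the endpoints, which is the behavior Section 3.1 describes.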
To justify our claims, we compare the accuracy of the produced BP neural network model using different activation functions with the Iris dataset (see Table 4).

3.2. Our SecureBP Network Model
In this section, we explain how to perform the lightweight interactive protocol to refresh ciphertexts during the training phase. To be precise, we explicitly describe a full pipeline of the evaluation of the SecureBP. We adopt the same assumptions as in the previous section so that the whole database can be encrypted in m ciphertexts.
First of all, in the setup phase, the user encrypts the dataset and sends it to the public cloud. The cloud randomly initializes the weights and biases (in the initialization phase, the weights and biases can be plaintexts). Next, we describe the iterative computation phase carried out in the cloud, whose goal in each iteration is to update the weights and biases. We write ct.x for the ciphertext of a value x, so that ct.w, ct.v, ct.b, ct.c, ct.h, and ct.o denote the ciphertexts of the weights, biases, and layer outputs. Each iteration consists of the following six steps:
Step 1. Cloud starts the iterative computation (here, the perturbation term, including the one in (8), represents the encryption of a small random number, which has no effect on the correctness of decryption):
Step 2. Cloud sends the resulting ciphertexts to the user. After decrypting and re-encrypting them, the user sends the refreshed ciphertexts back to the cloud for further computation.
Step 3. Cloud computes
Step 4. Cloud sends the resulting ciphertexts to the user. After decrypting and re-encrypting them, the user sends the refreshed ciphertexts back to the cloud for further computation.
Step 5. Cloud updates the ciphertexts of the output-layer weights and biases:
Step 6. Cloud updates the remaining weight and bias ciphertexts.

In the above iteration, we choose interaction between the cloud and the user to avoid high-cost bootstrapping. We send the outputs of the hidden layer and the output layer to the user; after the user refreshes these ciphertexts, they are returned to the cloud to continue the subsequent homomorphic operations. Because the outputs of the hidden layer and the output layer form only two ciphertext vectors, the communication cost between the cloud and the user is not high. Through the noise analysis in a later section, we find that this interactive protocol makes the ciphertext noise grow linearly during the homomorphic operations once it reaches a certain value (i.e., e^{33}).
In this process, note that what the cloud sends to the user are not the true outputs of the hidden layer and the output layer, but their disturbed versions. This prevents the user from snooping on the neural network that the cloud trains.
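The effect of the refresh steps on noise growth can be illustrated with a toy model. The sketch below performs no real encryption; it only tracks an abstract noise bound, squaring it on each simulated ciphertext multiplication and resetting it to the fresh level when the user refreshes, to show why the weight ciphertext's noise stays linear with refreshing but blows up without it (all names and constants are illustrative):

```python
def simulate(iterations, refresh):
    """Track a toy noise bound for the weight ciphertext over training.

    Each iteration multiplies intermediate ciphertexts (squaring the noise
    bound in this pessimistic model) and adds the result into the weight
    ciphertext. With `refresh`, the intermediates are re-encrypted at fresh
    noise first, so every additive contribution is a constant and the total
    grows linearly; without it, the contributions compound.
    """
    FRESH = 2                # noise bound of a freshly encrypted ciphertext
    w_noise = FRESH          # noise bound of the weight ciphertext
    inter = FRESH            # noise bound of the intermediate ciphertexts
    history = []
    for _ in range(iterations):
        if refresh:
            inter = FRESH    # user decrypts and re-encrypts (Steps 2 and 4)
        inter = inter ** 2   # homomorphic multiplications of this round
        w_noise += inter     # additive update of the weight ciphertext
        history.append(w_noise)
    return history

with_refresh = simulate(10, refresh=True)   # constant increment: linear growth
without_refresh = simulate(10, refresh=False)  # compounding blow-up
```

In this model, `with_refresh` increases by the same fixed amount every iteration, while `without_refresh` exceeds any practical noise budget within a handful of iterations, which is the qualitative behavior the protocol is designed to avoid.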
4. Estimation
In this section, we give the parameter settings for BP and the CKKS scheme and analyze the estimation and implementation results.
4.1. Parameter Settings and Estimation Results
4.1.1. Parameters for the BP Algorithm
In the BP model, the numbers of input and output nodes are determined by the data, while the number of hidden nodes is not. In fact, the number of hidden nodes affects the performance of the neural network; an empirical formula for choosing it is

h = sqrt(d + m) + a,

where h is the number of hidden nodes, d is the number of input nodes, m is the number of output nodes, and a is an adjustment constant between 0 and 10.
Weights are initialized with uniformly random values. Feature values in each dataset are normalized to lie between 0 and 1. The architecture and training parameters used in our secure neural network model are shown in Table 5; we use the Iris, Diabetes, and Sonar datasets from the University of California at Irvine (UCI) dataset repository [25]. The conventional BP network uses the same parameters as the SecureBP algorithm.
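As a small illustration, the hidden-layer sizing rule and the min-max normalization described above can be written as follows (assuming the common empirical formula h = sqrt(d + m) + a; the function names are ours):

```python
import math

def hidden_nodes(d, m, a):
    """Empirical rule h = sqrt(d + m) + a for the hidden-layer size.

    d: number of input nodes; m: number of output nodes;
    a: adjustment constant in [0, 10]. Rounded to the nearest integer.
    """
    return round(math.sqrt(d + m) + a)

def normalize(column):
    """Min-max normalization of one feature column into [0, 1]."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]
```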

4.1.2. Parameters for the CKKS Scheme
In the CKKS scheme, the coefficients of the error polynomials are sampled from a discrete Gaussian distribution of standard deviation σ, and the secret key is chosen randomly from the set of signed binary polynomials with Hamming weight h. We used the estimator of Albrecht et al. [35] to guarantee that the proposed parameter sets achieve at least an 80-bit security level against the known attacks on the LWE problem.
We analyze the growth of noise in certain ciphertexts; Table 6 provides theoretical upper bounds on the noise growth during homomorphic operations, measured relative to the noise of a fresh ciphertext.

As can be seen from Table 6, based on the maximum noise growth during homomorphic operations [36], we choose the parameters L = 10 and N = 2^{15}, together with the corresponding values of the remaining parameters.
4.2. Estimation and Implementation Results
By carefully analyzing our SecureBP protocol, we calculate the number of homomorphic operations required for each step in the course of an iteration (as shown in Table 7).

From Table 7, we see that the computation time required for an iteration depends only on the number of nodes in each layer. Combined with the time required for each homomorphic operation in [36], Table 8 gives the estimated time of training the SecureBP network homomorphically on the Iris, Diabetes, and Sonar datasets, and Table 9 compares the accuracy of the encrypted and unencrypted BP networks after 10 and 23 iterations, respectively.


4.3. Efficiency and Accuracy Discussion
There are still some limitations in applying our model to an arbitrary dataset. On the one hand, HE is a promising solution to the privacy issue, but its efficiency in real applications remains an open question; in other words, the efficiency of the SecureBP network is limited by the efficiency of homomorphic operations. On the other hand, we find that the accuracy of the network model is positively correlated with the degree of the approximating polynomial; however, a higher-degree polynomial means more homomorphic operations and thus more time to train the model. Therefore, a trade-off between training efficiency and model accuracy is needed.
5. Conclusion and Future Work
In this paper, we present the SecureBP network model for homomorphic training. We introduce two methods, a more accurate polynomial approximation and a lightweight interactive protocol, to address the difficulties encountered when the CKKS scheme is used to protect the BP network, and our method performs well experimentally on different datasets. For future work, we plan to explore how to train deep neural networks and convolutional neural networks effectively on encrypted data in a training-as-a-service setting.
Data Availability
All types of data used to support the findings of this study have been deposited in the University of California at Irvine (UCI) Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets.html).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the National Natural Science Foundation of China under Grant no. 61672030.
References
1. A. Esteva, B. Kuprel, R. A. Novoa et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol. 542, no. 7639, pp. 115–118, 2017.
2. V. Gulshan, L. Peng, M. Coram et al., "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs," JAMA, vol. 316, no. 22, pp. 2402–2410, 2016.
3. F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: a unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823, Boston, MA, USA, June 2015.
4. M. Barni, C. Orlandi, and A. Piva, "A privacy-preserving protocol for neural-network-based computation," in Proceedings of the 8th Workshop on Multimedia and Security, pp. 146–151, ACM, Geneva, Switzerland, 2006.
5. T. Chen and S. Zhong, "Privacy-preserving backpropagation neural network learning," IEEE Transactions on Neural Networks, vol. 20, no. 10, pp. 1554–1564, 2009.
6. C. Orlandi, A. Piva, and M. Barni, "Oblivious neural network computing via homomorphic encryption," EURASIP Journal on Information Security, vol. 2007, no. 1, Article ID 037343, 2007.
7. A. Piva, C. Orlandi, M. Caini, T. Bianchi, and M. Barni, "Enhancing privacy in remote data classification," in IFIP International Information Security Conference, pp. 33–46, Springer, Boston, MA, USA, 2008.
8. R. L. Rivest, L. Adleman, and M. L. Dertouzos, "On data banks and privacy homomorphisms," Foundations of Secure Computation, Academia Press, Ghent, Belgium, 1978.
9. C. Gentry, "Fully homomorphic encryption using ideal lattices," in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC '09), Bethesda, MD, USA, June 2009.
10. Z. Brakerski, C. Gentry, and V. Vaikuntanathan, "(Leveled) fully homomorphic encryption without bootstrapping," ACM Transactions on Computation Theory, vol. 6, no. 3, pp. 1–36, 2014.
11. Z. Brakerski and V. Vaikuntanathan, "Efficient fully homomorphic encryption from (standard) LWE," SIAM Journal on Computing, vol. 43, no. 2, pp. 831–871, 2014.
12. J. H. Cheon and D. Stehlé, "Fully homomorphic encryption over the integers revisited," in Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), pp. 513–536, Springer, Sofia, Bulgaria, April 2015.
13. C. Gentry, S. Halevi, and N. P. Smart, "Homomorphic evaluation of the AES circuit," in Advances in Cryptology—CRYPTO 2012, Lecture Notes in Computer Science, vol. 7417, pp. 850–867, Springer, Berlin, Germany, 2012.
14. C. Gentry, A. Sahai, and B. Waters, "Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based," in Advances in Cryptology—CRYPTO 2013, vol. 8042, pp. 75–92, Springer, Berlin, Germany, 2013.
15. M. van Dijk, C. Gentry, S. Halevi, and V. Vaikuntanathan, "Fully homomorphic encryption over the integers," in Advances in Cryptology—EUROCRYPT 2010, pp. 24–43, Springer, Monaco and Nice, France, May 2010.
16. S. Halevi and V. Shoup, "Algorithms in HElib," in Proceedings of the Annual Cryptology Conference (CRYPTO), pp. 554–571, Springer, Santa Barbara, CA, USA, August 2014.
17. H. Chen, K. Laine, and R. Player, "Simple encrypted arithmetic library—SEAL v2.1," in Proceedings of the International Conference on Financial Cryptography and Data Security, vol. 10323, pp. 3–18, Springer, 2017.
18. Z. Brakerski, C. Gentry, and V. Vaikuntanathan, "(Leveled) fully homomorphic encryption without bootstrapping," in Proceedings of Innovations in Theoretical Computer Science (ITCS 2012), pp. 309–325, Cambridge, MA, USA, January 2012.
19. L. Ducas and D. Micciancio, "FHEW: bootstrapping homomorphic encryption in less than a second," in Advances in Cryptology—EUROCRYPT 2015, Part I, Lecture Notes in Computer Science, vol. 9056, pp. 617–640, Springer, Berlin, Germany, 2015.
20. I. Chillotti, N. Gama, M. Georgieva, and M. Izabachène, "Faster fully homomorphic encryption: bootstrapping in less than 0.1 seconds," in Advances in Cryptology—ASIACRYPT 2016, Part I, pp. 3–33, Springer, Berlin, Germany, 2016.
21. I. Chillotti, N. Gama, M. Georgieva, and M. Izabachène, "Faster packed homomorphic operations and efficient circuit bootstrapping for TFHE," in Advances in Cryptology—ASIACRYPT 2017, Part I, vol. 10624, pp. 377–408, Springer, Cham, Switzerland, 2017.
22. J. H. Cheon, K. Han, A. Kim, M. Kim, and Y. Song, "Bootstrapping for approximate homomorphic encryption," in Advances in Cryptology—EUROCRYPT 2018, Part I, pp. 360–384, Springer, Cham, Switzerland, 2018.
23. J. H. Cheon, A. Kim, M. Kim, and Y. Song, "Homomorphic encryption for arithmetic of approximate numbers," in Advances in Cryptology—ASIACRYPT 2017, Part I, vol. 10624, pp. 409–437, Springer, Cham, Switzerland, 2017.
24. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," Tech. Rep., University of California San Diego, Institute for Cognitive Science, San Diego, CA, USA, 1985.
25. C. L. Blake, UCI Repository of Machine Learning Databases, University of California, Irvine, CA, USA, 1998, http://www.ics.uci.edu/mlearn/MLRepository.html.
26. F. Bourse, M. Minelli, M. Minihold, and P. Paillier, "Fast homomorphic evaluation of deep discretized neural networks," in Advances in Cryptology—CRYPTO 2018, Part III, Lecture Notes in Computer Science, vol. 10993, pp. 483–512, Springer, Cham, Switzerland, 2018.
27. C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, "GAZELLE: a low latency framework for secure neural network inference," in Proceedings of USENIX Security 2018, pp. 1651–1669, Baltimore, MD, USA, 2018.
28. M. S. Riazi, C. Weinert, O. Tkachenko, E. M. Songhori, T. Schneider, and F. Koushanfar, "Chameleon: a hybrid secure computation framework for machine learning applications," in Proceedings of the Asia Conference on Computer and Communications Security, pp. 707–721, Incheon, Korea, June 2018.
29. B. D. Rouhani, M. S. Riazi, and F. Koushanfar, "DeepSecure: scalable provably-secure deep learning," in Proceedings of the 55th ACM/ESDA/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, June 2018.
30. R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing, "CryptoNets: applying neural networks to encrypted data with high throughput and accuracy," in Proceedings of the International Conference on Machine Learning, pp. 201–210, New York, NY, USA, June 2016.
31. A. Kim, Y. Song, M. Kim, K. Lee, and J. H. Cheon, "Logistic regression model training based on the approximate homomorphic encryption," BMC Medical Genomics, vol. 11, no. S4, 2018.
32. J. W. Bos, K. Lauter, and M. Naehrig, "Private predictive analysis on encrypted medical data," Journal of Biomedical Informatics, vol. 50, pp. 234–243, 2014.
33. P. Mohassel and Y. Zhang, "SecureML: a system for scalable privacy-preserving machine learning," in Proceedings of the IEEE Symposium on Security and Privacy, pp. 19–38, San Jose, CA, USA, May 2017.
34. J. Yuan and S. Yu, "Privacy preserving back-propagation neural network learning made practical with cloud computing," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 212–221, 2014.
35. M. R. Albrecht, R. Player, and S. Scott, "On the concrete hardness of learning with errors," Journal of Mathematical Cryptology, vol. 9, no. 3, pp. 169–203, 2015.
36. J. H. Cheon, K. Han, A. Kim, M. Kim, and Y. Song, "A full RNS variant of approximate homomorphic encryption," in Selected Areas in Cryptography—SAC 2018, vol. 11349, pp. 347–368, Springer, Cham, Switzerland, 2018.
Copyright
Copyright © 2020 Qinju Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.