Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data

Authors: Qian Lou, Bo Feng, Geoffrey Charles Fox, Lei Jiang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show Glyph obtains state-of-the-art accuracy, and reduces training latency by 69%–99% over prior FHE-based privacy-preserving techniques on encrypted datasets.
Researcher Affiliation | Academia | Qian Lou (louqian@iu.edu), Bo Feng (fengbo@iu.edu), Geoffrey C. Fox (gcf@indiana.edu), Lei Jiang (jiang60@iu.edu), Indiana University Bloomington
Pseudocode | No | The paper describes its methods through textual explanation and diagrams, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating the release of open-source code for the described methodology.
Open Datasets | Yes | Our encrypted datasets include MNIST [22] and Skin-Cancer-MNIST [20]. ... We also used SVHN [23] and CIFAR-10 [24] to pre-train our models, which are for transfer learning on encrypted datasets.
Dataset Splits | No | Skin-Cancer-MNIST consists of 10015 dermatoscopic images and includes a representative collection of 7 important diagnostic categories in the realm of pigmented lesions. We grouped it into an 8K training dataset and a 2K test dataset. [A minimal split sketch follows the table.]
Hardware Specification | Yes | We evaluated all schemes on an Intel Xeon E7-8890 v4 2.2GHz CPU with 256GB DRAM. It has two sockets, each of which owns 12 cores and supports 24 threads.
Software Dependencies | No | The paper mentions using the HElib [7] library and the TFHE [9] library, but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | We adopted two network architectures, a 3-layer MLP [2] and a 4-layer CNN shown in Figure 4. ... We quantized the inputs, weights and activations of the two network architectures to 8 bits by the training quantization technique in SWALP [25]. For BGV, we used the same parameter-setting rule as [21]. ... We set the parameters of TFHE to the same security level as BGV. ... For first-level TLWE, we set the minimal noise standard deviation to α = 6.10 × 10^-5 and the count of coefficients to n = 280 to achieve the security level of λ = 80. [An illustrative quantization sketch follows the table.]
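
For the dataset-splits row, the paper reports only the resulting sizes of the Skin-Cancer-MNIST grouping (8K training / 2K test out of 10015 images), not how the split was made. The sketch below is a minimal illustration of such a split; the shuffling, random seed, and lack of class stratification are assumptions, not details from the paper.

```python
import numpy as np

def split_skin_cancer_mnist(images, labels, n_train=8000, seed=0):
    """Shuffle and split the 10,015 Skin-Cancer-MNIST samples into
    an ~8K training set and an ~2K test set. The procedure here is
    assumed; the paper only reports the resulting 8K/2K sizes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return ((images[train_idx], labels[train_idx]),
            (images[test_idx], labels[test_idx]))
```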
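The experiment-setup row states that inputs, weights, and activations are quantized to 8 bits using the training quantization technique of SWALP [25]. The sketch below shows only generic uniform symmetric 8-bit quantization to convey the idea; it is not SWALP's actual procedure, which additionally involves stochastic rounding and low-precision weight averaging during training, and all function names here are illustrative.

```python
import numpy as np

def quantize_8bit(x):
    """Uniform symmetric 8-bit quantization of an array.
    Illustrative only; SWALP [25] additionally uses stochastic
    rounding and weight averaging in its low-precision training."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map 8-bit integer codes back to floating point."""
    return q.astype(np.float32) * scale
```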