Partially Encrypted Deep Learning using Functional Encryption
Authors: Théo Ryffel, David Pointcheval, Francis Bach, Edouard Dufour-Sans, Romain Gay
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the practicality of our approach using a dataset inspired from MNIST [27], which is made of images of digits written using two different fonts. We show how to perform classification of the encrypted digit images in less than 3 seconds with over 97.7% accuracy while making the font prediction a hard task for a whole set of adversaries. In Figures 7 and 8 we show that the output size has an important influence on the two tasks' performances. Table 1 shows that encryption time is longer than evaluation time, but a single encryption can be used with several decryption keys dk_qi to perform multiple evaluation tasks. |
| Researcher Affiliation | Academia | Théo Ryffel (1,2), Edouard Dufour-Sans (1), Romain Gay (1,3), Francis Bach (2,1) and David Pointcheval (1,2); (1) Département d'informatique de l'ENS, ENS, CNRS, PSL University, Paris, France; (2) INRIA, Paris, France; (3) University of California, Berkeley |
| Pseudocode | Yes | Figure 2: Our functional encryption scheme for quadratic polynomials. |
| Open Source Code | Yes | All code and implementations can be found online at github.com/LaRiffle/collateral-learning and github.com/edufoursans/reading-in-the-dark. |
| Open Datasets | Yes | We demonstrate the practicality of our approach using a dataset inspired from MNIST [27], which is made of images of digits written using two different fonts. Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. |
| Dataset Splits | Yes | We can observe this through the respective accuracies as it is shown in Figure 5, where the main and adversarial networks are CNNs as in Section 3.3 with 10 epochs of training using 7-fold cross validation. |
| Hardware Specification | Yes | Table 1: Average runtime for the FE scheme using a 2.7 GHz Intel Core i7 and 16GB of RAM. |
| Software Dependencies | No | The paper mentions software like 'charm [3]', 'pytorch', and 'sklearn library [33]' but does not provide specific version numbers for these components to ensure reproducibility. |
| Experiment Setup | Yes | For this experiment, we use α = 1.7 as detailed in Appendix C.2, the adversary uses the same CNN as stated above and the main network is a simple feed forward network (FFN) with 4 layers. Figure 6: Our semi-adversarial training scheme. |
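The "Pseudocode" row refers to a functional encryption scheme for quadratic polynomials (the paper's Figure 2). In the clear, the functionality such a scheme evaluates is simply a quadratic form over the input. The sketch below illustrates that plaintext functionality only, not the encryption itself; the function and variable names are ours, not the paper's.

```python
import numpy as np

def quadratic_functionality(x, Q):
    """Plaintext version of the function class a quadratic FE scheme
    supports: f_Q(x) = x^T Q x, a degree-2 polynomial in the entries of x."""
    return x @ Q @ x

# Toy input: with Q = identity, f_Q(x) is the squared Euclidean norm of x.
x = np.array([1.0, 2.0, 3.0])
Q = np.eye(3)
value = quadratic_functionality(x, Q)
```

In the paper's setting, a decryption key dk_qi reveals only this scalar f_Q(x) from an encryption of x, which is what makes evaluating a small quadratic network on encrypted inputs possible.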
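The semi-adversarial training described in the "Experiment Setup" row trades off the main (digit) task against the adversarial (font) task with a weight α = 1.7. A minimal numpy sketch of that combined objective follows; the function names and the toy batch are ours, and real training would use PyTorch networks as in the paper.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy over a batch of probability rows."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def semi_adversarial_loss(main_probs, digit_labels, adv_probs, font_labels, alpha=1.7):
    """Combined objective: minimize the digit-classification loss while
    maximizing (via the negative sign) the font adversary's loss."""
    return cross_entropy(main_probs, digit_labels) - alpha * cross_entropy(adv_probs, font_labels)

# Toy batch: 2 samples, 10 digit classes, 2 font classes.
# The main network is confident and correct; the adversary is maximally
# confused about the font (uniform over the 2 classes), which is the
# desired outcome of collateral learning.
main_probs = np.array([[0.9] + [0.1 / 9] * 9,
                       [0.1 / 9] * 9 + [0.9]])
adv_probs = np.array([[0.5, 0.5],
                      [0.5, 0.5]])
loss = semi_adversarial_loss(main_probs, np.array([0, 9]),
                             adv_probs, np.array([0, 1]))
```

The lower this combined loss, the better the main network performs while keeping font prediction hard, matching the two goals the reproducibility table quotes.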