Low Latency Privacy Preserving Inference

Authors: Alon Brutzkus, Ran Gilad-Bachrach, Oren Elisha

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of our methods on several computer vision tasks.
Researcher Affiliation | Collaboration | (1) Microsoft Research and Tel Aviv University, Israel; (2) Microsoft, Israel; (3) Microsoft Research, Israel.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Descriptions of procedures are given in narrative text.
Open Source Code | Yes | Our code is freely available at https://github.com/microsoft/CryptoNets.
Open Datasets | Yes | Here we present private predictions on the MNIST data-set (LeCun et al., 2010)... The CIFAR-10 data-set (Krizhevsky & Hinton, 2009)... Caltech-101 dataset (Fei-Fei et al., 2006).
Dataset Splits | No | The paper does not explicitly provide the training/validation/test splits needed to reproduce all experiments. For Caltech-101 it states that "the first 20 were used for training and the other 10 examples were used for testing", which gives a train/test split, but no validation split is specified for any dataset. A sketch of this per-class split appears after the table.
Hardware Specification | Yes | On the reference machine used for this work (Azure standard B8ms virtual machine with 8 vCPUs and 32GB of RAM)
Software Dependencies | Yes | We use version 2.3.1 of the SEAL library, http://sealcrypto.org/
Experiment Setup | Yes | As a benchmark, we applied both to the same network, which has an accuracy of 98.95%. After suppressing adjacent linear layers it can be presented as a 5×5 convolution layer with a stride of (2, 2) and 5 output maps, which is followed by a square activation function that feeds a fully connected layer with 100 output neurons, another square activation, and another fully connected layer with 10 outputs (in the supplementary material we include an image of the architecture). A sketch of this architecture appears after the table.
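The Caltech-101 protocol quoted in the Dataset Splits row is simple enough to state in code. The sketch below is an illustrative reconstruction, not the authors' script: it assumes the standard Caltech-101 layout of one directory per category, and that "first" means sorted file order.

```python
# Hypothetical reconstruction of the per-class Caltech-101 split quoted
# above: the first 20 images of each category go to training and the
# next 10 to testing. The directory layout and file ordering are assumptions.
from pathlib import Path

def split_caltech101(root: str) -> tuple[list[Path], list[Path]]:
    train, test = [], []
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        images = sorted(class_dir.glob("*.jpg"))
        train.extend(images[:20])   # first 20 examples per class -> train
        test.extend(images[20:30])  # next 10 examples per class -> test
    return train, test

train_files, test_files = split_caltech101("caltech101/101_ObjectCategories")
```

Note that no validation split can be reconstructed this way, which is exactly the gap the row points out.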
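The Experiment Setup row describes the benchmark network compactly. "Suppressing adjacent linear layers" works because two consecutive linear operations compose into a single one: W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2). Below is a minimal PyTorch sketch of the resulting network; the input padding, and hence the flattened feature size, is an assumption on my part, since the exact shapes are given only in the paper's supplementary figure.

```python
# Minimal sketch of the benchmark network described above:
# 5x5 convolution (stride 2, 5 output maps) -> square activation ->
# fully connected (100 neurons) -> square activation -> fully connected (10).
# The padding of 2 (and the resulting 14x14 feature maps) is an assumption.
import torch
import torch.nn as nn

class Square(nn.Module):
    """x -> x^2, the HE-friendly activation the paper uses in place of ReLU."""
    def forward(self, x):
        return x * x

model = nn.Sequential(
    nn.Conv2d(1, 5, kernel_size=5, stride=2, padding=2),  # 28x28 -> 14x14 (assumed)
    Square(),
    nn.Flatten(),
    nn.Linear(5 * 14 * 14, 100),
    Square(),
    nn.Linear(100, 10),
)

logits = model(torch.randn(1, 1, 28, 28))  # MNIST-sized dummy input
```

The square activation is what makes the network evaluable under homomorphic encryption, where only additions and multiplications are available.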