TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

Authors: Amartya Sanyal, Matt Kusner, Adria Gascon, Varun Kanade

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5. Experimental Results: In this section we report encrypted binary neural network prediction experiments on a number of real-world datasets. We begin by comparing the efficiency of the two circuits used for the inner product, the reduce tree and the sorting network. We then describe the datasets and the architecture of the BNNs used for classification. We report the classification timings of these BNNs for each dataset under different computational settings. Finally, we give accuracies of the BNNs compared to floating-point networks."
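The quoted section compares two circuits (a reduce tree and a sorting network) for the inner product at the heart of a BNN layer. As background for the rows below, here is a minimal plaintext sketch of that binarized inner product: with ±1 values encoded as bits (1 ↦ +1, 0 ↦ −1), multiplication becomes XNOR and the sum becomes a popcount. The function name and bias handling are illustrative assumptions, not the paper's code, and ordinary Boolean operations stand in for homomorphic gates.

```python
def bnn_neuron(x_bits, w_bits, bias=0):
    # Binarized inner product (plaintext sketch, hypothetical helper):
    # encode +1 as bit 1 and -1 as bit 0. Then (+1/-1) multiplication
    # is XNOR, and summing the products reduces to a popcount.
    d = len(x_bits)
    popcount = sum(1 - (x ^ w) for x, w in zip(x_bits, w_bits))  # XNOR, then count ones
    # In +/-1 arithmetic, <x, w> = 2*popcount - d; apply the SIGN activation.
    return 1 if 2 * popcount - d + bias >= 0 else 0
```

In the encrypted setting the popcount itself is what the reduce tree or sorting network computes over ciphertext bits; this sketch only shows the arithmetic identity they implement.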
Researcher Affiliation | Academia | 1. University of Oxford, Oxford, UK; 2. The Alan Turing Institute, London, UK; 3. University of Warwick, Coventry, UK
Pseudocode | Yes | Algorithm 1: Comparator
  Inputs: encrypted B[ΣS], unencrypted B[(d−b)/2], size d of B[(d−b)/2] and B[ΣS]
  Output: result of 2ΣS ≥ d − b
  1: o = 0
  2: for i = 1, . . . , d do
  3:   if B[(d−b)/2]_i = 0 then
  4:     o = MUX(B[ΣS]_i, 1, o)
  5:   else
  6:     o = MUX(B[ΣS]_i, o, 0)
  7:   end if
  8: end for
  9: Return: o
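The comparator scans two unary ("thermometer") encodings and lets the last disagreeing position decide the comparison. A minimal plaintext simulation, assuming both vectors are sorted ones-first: `mux` here is an ordinary Boolean select standing in for the homomorphic MUX gate, and `unary` is a hypothetical helper producing the B[·] encodings, neither taken from the paper's code.

```python
def mux(s, a, b):
    # Plaintext stand-in for the homomorphic MUX gate: a if s == 1, else b.
    return a if s else b

def unary(value, d):
    # Thermometer encoding of length d, ones first: bit i is 1 iff value > i.
    return [1 if i < value else 0 for i in range(d)]

def comparator(S_bits, t_bits):
    # Algorithm 1 over plaintext bits: S_bits encodes the encrypted tally
    # B[sum(S)], t_bits the public threshold B[(d-b)/2]. The loop keeps
    # updating o so that the last position where the vectors differ wins.
    o = 0
    for s, t in zip(S_bits, t_bits):
        if t == 0:
            o = mux(s, 1, o)   # a 1 in S beyond the threshold forces o = 1
        else:
            o = mux(s, o, 0)   # a 0 in S within the threshold forces o = 0
    return o
```

For example, `comparator(unary(5, 8), unary(2, 8))` reports that a tally of 5 clears a threshold of 2, while `comparator(unary(1, 8), unary(3, 8))` reports that a tally of 1 does not clear a threshold of 3.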
Open Source Code | Yes | "Our code is freely available at (tap, 2018)." TAPAS: tricks for accelerating (encrypted) prediction as a service, https://github.com/amartya18x/tapas, 2018.
Open Datasets | Yes | Cancer: the Cancer dataset (https://tinyurl.com/gl3yhzb) contains 569 data points... Diabetes: this dataset (https://tinyurl.com/m6upj7y) contains data on 100,000 patients with diabetes... MNIST: the images in MNIST are 28 × 28 binary images; the training and test sets are already available in a standard split, which is what we use.
Dataset Splits | Yes | "Similar to Meehan et al. (2018) we divide the dataset into a training set and a test set in a 70:30 ratio." "We divide patients into an 80/20 train/test split."
Hardware Specification | Yes | Computed with Intel Xeon CPUs @ 2.40 GHz, processor number E5-2673 v3
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper.
Experiment Setup | No | The paper describes network architectures (e.g., a single 90 × 1 fully connected layer followed by a batch normalization layer for Cancer; 5 convolutional layers, each followed by a batch normalization layer and a SIGN activation function, for Faces), but it does not provide specific hyperparameter values such as learning rate, batch size, or optimizer settings.