CryptoNAS: Private Inference on a ReLU Budget

Authors: Zahra Ghodsi, Akshaj Kumar Veldanda, Brandon Reagen, Siddharth Garg

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | CryptoNAS improves accuracy by 3.4% and latency by 2.4× over the state-of-the-art. CryptoNAS is evaluated on CIFAR-10 and CIFAR-100 [18]. We see that CryptoNAS's Pareto frontier dominates all prior points for regions of interest. |
| Researcher Affiliation | Academia | Zahra Ghodsi, Akshaj Veldanda, Brandon Reagen, Siddharth Garg, New York University, {zg451, akv275, bjr5, sg175}@nyu.edu |
| Pseudocode | Yes | Algorithm 1: CryptoNAS Algorithm |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the described methodology, nor a link to a repository. |
| Open Datasets | Yes | CryptoNAS is evaluated on CIFAR-10 and CIFAR-100 [18]. |
| Dataset Splits | No | The paper mentions using a validation set in Algorithm 1, but does not give the percentages or sample counts for the training, validation, and test splits needed for reproduction; it only describes the preprocessing and augmentation pipeline (see Experiment Setup). |
| Hardware Specification | Yes | Experiments for latency are run on a 3 GHz Intel Xeon E5-2690 processor with 60 GB of RAM, and networks are trained on Tesla P100 GPUs. |
| Software Dependencies | No | The paper mentions using the SEAL [16] and ABY [17] libraries, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | Datasets are preprocessed with image centering (subtracting the mean and dividing by the standard deviation), and images are augmented for training using random horizontal flips, 4-pixel padding, and random crops. We use CryptoNAS to discover three models with depth = {6, 12, 24}, which we refer to as CNet1, CNet2, and CNet3. (A sketch of this preprocessing pipeline follows the table.) |
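The preprocessing and augmentation quoted in the Experiment Setup row can be written down concretely. Below is a minimal sketch assuming a PyTorch/torchvision pipeline, which the paper does not specify; the CIFAR-10 mean and standard deviation constants are the commonly used values, not numbers taken from the paper.

```python
import torchvision.transforms as T

# Commonly used CIFAR-10 channel statistics (assumed; the paper only says
# images are centered by subtracting the mean and dividing by the std).
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

# Training-time augmentation as described in the paper: random horizontal
# flips, 4-pixel padding with random 32x32 crops, followed by centering.
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),
    T.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])

# Evaluation applies only the centering step.
test_transform = T.Compose([
    T.ToTensor(),
    T.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])
```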
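Because the paper's central constraint is a ReLU budget (ReLUs dominate private-inference latency in its setting, where nonlinear operations are handled with the ABY library [17]), reproducing the results requires measuring how many ReLUs a candidate network such as CNet1–CNet3 evaluates. The helper below is an illustrative utility, not code from the paper; it assumes a PyTorch model whose nonlinearities are `nn.ReLU` modules.

```python
import torch
import torch.nn as nn

def count_relus(model: nn.Module, input_shape=(1, 3, 32, 32)) -> int:
    """Count ReLU activations evaluated in one forward pass (batch size 1)."""
    total = 0

    def hook(module, inputs, output):
        nonlocal total
        total += output.numel()  # one ReLU evaluation per output element

    # Attach a forward hook to every nn.ReLU module in the network.
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    model.eval()
    with torch.no_grad():
        model(torch.zeros(input_shape))  # dummy CIFAR-sized input
    for h in handles:
        h.remove()
    return total

# Usage with a hypothetical searched model:
# relu_budget = count_relus(my_cnet)
```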