SAFENet: A Secure, Accurate and Fast Neural Network Inference
Authors: Qian Lou, Yilin Shen, Hongxia Jin, Lei Jiang
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show SAFENet obtains the state-of-the-art inference latency and performance, reducing latency by 38%-61% or improving accuracy by 1.8%-4% over prior techniques on various encrypted datasets. |
| Researcher Affiliation | Collaboration | Qian Lou^1, Yilin Shen^2, Hongxia Jin^2, and Lei Jiang^1; ^1 Indiana University Bloomington ({louqian, jiang60}@iu.edu), ^2 Samsung Research America ({yilin.shen, hongxia.jin}@samsung.com) |
| Pseudocode | Yes | Algorithm 1: Binary Tree Population Based Training BTPBT(M_T, D, A_t, W). Input: a T-layer neural network M_T with weight W, training data D, accuracy threshold A_t. Output: activation approximation rate α[0:T-1], polynomial degree β[0:T-1], score S, accuracy A, and weight W. 1: Construct a binary tree... (a hedged Python sketch of this tree-structured search is given after the table). |
| Open Source Code | No | The paper mentions using and citing open-source libraries, namely the SEAL library and the Multi-Protocol fancy-garbling library by Carmer et al. (2019), with their respective links, but does not state that the code for SAFENet itself is open source or provide a link to its implementation. |
| Open Datasets | Yes | Our secure inference experiments rely on CIFAR-10 and CIFAR-100 datasets. |
| Dataset Splits | No | Both CIFAR-10 and CIFAR-100 contain 50000 training images and 10000 testing images, where each image size is 32 × 32 × 3. The paper specifies training and testing images, but does not explicitly state a separate validation set split or its size/percentage from the training data. |
| Hardware Specification | Yes | We ran the secure inference models on two instances. They are equipped with an Intel Xeon E7-4850 CPU and 16 GB DRAM. The hyper-parameters optimization for activation replacement requires an NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The SEAL library and the Multi-Protocol fancy-garbling library by Carmer et al. (2019) are used to implement the HE and garbled-circuit functions. [...] The hyper-parameter selection is implemented in Python, and BTPBT is built on regular PBT (Jaderberg et al., 2017) in the Tune platform (Liaw et al., 2018). |
| Experiment Setup | Yes | For PBT and our work, the evolution cycle between two exploitations is 20 iterations, and 50 workers are used to simultaneously search parameters. For BTPBT, each two sibling nodes and the root node require a PBT separately, therefore 2^levels PBTs are required. We set 20 epochs for each node's PBT. We use 40 epochs to jointly search the activation approximation. The total 200 epochs with 50 workers take 7 GPU hours. (A hedged Ray Tune PBT configuration sketch follows the table.) |
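
The pseudocode row above quotes only the header of Algorithm 1 (BTPBT). The following minimal sketch illustrates how a binary-tree schedule of population-based-training runs could produce per-layer approximation rates α and polynomial degrees β, assuming one PBT run per sibling pair plus one for the root, which yields the 2^levels runs quoted in the setup row. The helpers `split` and `run_pbt`, the tree layout, and all parameter ranges are assumptions for illustration, not the authors' implementation.

```python
import random

def split(layers):
    """Split a layer range into two halves (assumed tree layout)."""
    mid = len(layers) // 2
    return layers[:mid], layers[mid:]

def run_pbt(layers, acc_threshold):
    """Stand-in for one population-based-training run over `layers`; a real run
    would fine-tune the network and return the best (alpha, beta) per layer."""
    return {l: (round(random.uniform(0.3, 0.9), 2), random.choice([1, 2])) for l in layers}

def btpbt(num_layers, levels, acc_threshold=0.9):
    """One PBT per sibling pair plus one for the root: 2**levels runs in total."""
    alpha, beta = [0.0] * num_layers, [1] * num_layers
    frontier = [list(range(num_layers))]           # root node covers all layers
    pairs = []
    for _ in range(levels):                        # build sibling pairs level by level
        next_frontier = []
        for node in frontier:
            left, right = split(node)
            pairs.append((left, right))
            next_frontier += [left, right]
        frontier = next_frontier
    runs = [l + r for l, r in reversed(pairs)] + [list(range(num_layers))]
    assert len(runs) == 2 ** levels                # matches the "2^levels PBT" count above
    for layers in runs:                            # deepest sibling pairs first, root last
        for l, (a, b) in run_pbt(layers, acc_threshold).items():
            alpha[l], beta[l] = a, b
    return alpha, beta

print(btpbt(num_layers=18, levels=3))
```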
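
The software-dependencies and setup rows state that the search is built on standard PBT (Jaderberg et al., 2017) in the Tune platform (Liaw et al., 2018), with 50 workers and an evolution cycle of 20 iterations between exploitations. The sketch below shows how such a search could be configured with Ray Tune's PopulationBasedTraining scheduler; the hyperparameter names, ranges, initial values, and the toy `train_fn` are assumptions, not SAFENet's actual training code.

```python
import random
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

def train_fn(config):
    # Toy stand-in for fine-tuning the approximated network; a real trainable
    # would train the model and report its validation accuracy each iteration.
    acc = 0.0
    for _ in range(200):
        acc = min(1.0, acc + 0.005 * (1.0 - config["alpha"]))
        tune.report(accuracy=acc)

scheduler = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="accuracy",
    mode="max",
    perturbation_interval=20,                       # evolution cycle: 20 iterations between exploitations
    hyperparam_mutations={
        "alpha": lambda: random.uniform(0.0, 1.0),  # activation approximation rate (assumed range)
        "beta": [1, 2],                             # polynomial degree choices (assumed)
    },
)

tune.run(
    train_fn,
    config={"alpha": 0.5, "beta": 2},               # initial hyper-parameters (assumed)
    num_samples=50,                                 # 50 workers searching in parallel
    scheduler=scheduler,
)
```

With this configuration each trial is perturbed and exploited every 20 reported iterations, mirroring the 50-worker, 20-iteration evolution cycle described in the setup row.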