HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture

Authors: Qian Lou, Lei Jiang

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that, compared to state-of-the-art (SOTA) PPNNs, HEMET reduces the inference latency by 59.3%∼61.2%, and improves the inference accuracy by 0.4%∼0.5%.
Researcher Affiliation | Academia | Qian Lou and Lei Jiang, Indiana University Bloomington. Correspondence to: Lei Jiang <jiang60@iu.edu>.
Pseudocode | Yes | Algorithm 1: HE-Friendly Network Architecture Search
Open Source Code | No | The paper cites a third-party library ("SEAL. Microsoft SEAL (release 3.6). https://github.com/Microsoft/SEAL, November 2020. Microsoft Research, Redmond, WA.") but provides no statement or link indicating that the code for its own methodology is open-sourced.
Open Datasets | Yes | We adopt the datasets of CIFAR-10 and CIFAR-100 to evaluate our proposed techniques, because they are the most complex datasets prior PPNNs can be evaluated on (Dathathri et al., 2019; 2020).
Dataset Splits | No | The paper states: "50K images are used for training, while 10K images are used for testing in CIFAR-10." It specifies training and testing splits but does not explicitly mention a validation split.
Hardware Specification | Yes | We ran all PPNN inferences on a server-level hardware platform, which is equipped with an Intel Xeon Gold 5120 2.2GHz CPU with 56 cores and 256GB DRAM memory.
Software Dependencies | No | The paper mentions software such as TensorFlow, the EVA compiler, and the Microsoft SEAL library, but does not give specific version numbers for these components in its experimental setup description, except indirectly through the bibliography entry for SEAL (release 3.6).
Experiment Setup | Yes | By following (Dathathri et al., 2020), we set the initial scale of encrypted input message to 25-bit, and the scale of weight filters and masks to 15-bit. The coefficients of approximated activation layers and batch normalization layers are set to 10-bit.
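
The Pseudocode row above refers to the paper's Algorithm 1 (HE-Friendly Network Architecture Search), which is not reproduced in this summary. The following is a generic, hypothetical sketch of what such a search loop can look like: candidates are scored on plaintext validation accuracy while an estimated homomorphic cost, such as multiplicative depth, is kept under a budget. Every name in the block (SEARCH_SPACE, estimate_he_cost, train_and_evaluate, mutate, search) is a placeholder introduced here, not an element of the paper's algorithm.

```python
import random

# Hypothetical search space: each candidate architecture is a list of block
# names. The blocks, costs, and helper functions below are placeholders and
# are NOT taken from HEMET's Algorithm 1.
SEARCH_SPACE = ["conv3x3", "depthwise3x3", "pointwise1x1", "squeeze_excite"]
DEPTH_PER_BLOCK = {"conv3x3": 2, "depthwise3x3": 2, "pointwise1x1": 1, "squeeze_excite": 3}

def estimate_he_cost(arch):
    """Stand-in for an HE cost model, e.g. total multiplicative depth."""
    return sum(DEPTH_PER_BLOCK[b] for b in arch)

def train_and_evaluate(arch):
    """Stand-in for plaintext training; returns a fake validation accuracy."""
    return random.uniform(0.5, 0.9)

def mutate(arch):
    """Swap one randomly chosen block for another block from the search space."""
    new_arch = list(arch)
    new_arch[random.randrange(len(new_arch))] = random.choice(SEARCH_SPACE)
    return new_arch

def search(initial_arch, iterations=100, max_cost=20):
    """Keep the most accurate candidate whose estimated HE cost fits the budget."""
    best_arch, best_acc = initial_arch, train_and_evaluate(initial_arch)
    for _ in range(iterations):
        candidate = mutate(best_arch)
        if estimate_he_cost(candidate) > max_cost:
            continue  # too deep/expensive to evaluate under leveled HE
        acc = train_and_evaluate(candidate)
        if acc > best_acc:
            best_arch, best_acc = candidate, acc
    return best_arch, best_acc

if __name__ == "__main__":
    print(search(["conv3x3", "depthwise3x3", "pointwise1x1"]))
```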
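
The Dataset Splits row quotes the standard CIFAR-10 distribution of 50K training and 10K test images. Below is a minimal sketch of reproducing that split with tf.keras (the paper mentions TensorFlow but gives no version); the 5,000-image validation hold-out is an assumption added here for illustration, since the paper does not describe a validation set.

```python
import tensorflow as tf

# Standard CIFAR-10 distribution: 50,000 training and 10,000 test images,
# matching the split quoted in the "Dataset Splits" row.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
assert x_train.shape[0] == 50_000 and x_test.shape[0] == 10_000

# The paper does not mention a validation set; holding out the last 5,000
# training images is an assumption made here for illustration only.
x_val, y_val = x_train[-5_000:], y_train[-5_000:]
x_train, y_train = x_train[:-5_000], y_train[:-5_000]

# Scale pixel values to [0, 1].
x_train, x_val, x_test = (a.astype("float32") / 255.0 for a in (x_train, x_val, x_test))
```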
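
The Experiment Setup row quotes a 25-bit initial scale for encrypted inputs, 15-bit scales for weight filters and masks, and 10-bit coefficients for approximated activation and batch-normalization layers. Below is a minimal sketch, assuming Microsoft EVA's Python front end (EvaProgram, set_input_scales, CKKSCompiler, generate_keys), of where the input-scale setting would appear. The toy program is not the paper's network, and the 15-bit and 10-bit constant scales appear only as comments because EVA chooses plaintext scales during compilation.

```python
from eva import EvaProgram, Input, Output
from eva.ckks import CKKSCompiler
from eva.seal import generate_keys

# Toy encrypted computation standing in for one PPNN layer: an elementwise
# plaintext "weight" multiply followed by a square (polynomial) activation.
# This is NOT the paper's network; it only shows where the scale setting goes.
prog = EvaProgram('toy_layer', vec_size=1024)
with prog:
    x = Input('x')            # encrypted input message
    y = 0.5 * x               # 0.5 stands in for a weight/mask constant; EVA
                              # assigns plaintext scales (e.g. ~15-bit) itself
    Output('out', y * y)      # square activation, a common HE-friendly choice

# 25-bit initial input scale, as quoted from the paper's setup.
prog.set_input_scales(25)
# Output ranges must also be declared; 20 bits is an illustrative value only.
prog.set_output_ranges(20)

compiled, params, signature = CKKSCompiler().compile(prog)
public_ctx, secret_ctx = generate_keys(params)

enc_inputs = public_ctx.encrypt({'x': [0.1] * prog.vec_size}, signature)
enc_outputs = public_ctx.execute(compiled, enc_inputs)
outputs = secret_ctx.decrypt(enc_outputs, signature)
```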