CoPriv: Network/Protocol Co-Optimization for Communication-Efficient Private Inference

Authors: Wenxuan Zeng, Meng Li, Haichuan Yang, Wen-jie Lu, Runsheng Wang, Ru Huang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare CoPriv with the SOTA 2PC protocol, CrypTFlow2, and demonstrate 2.1× communication reduction for both ResNet-18 and ResNet-32 on CIFAR-100. We also compare CoPriv with SOTA network optimization methods, including SNL, MetaPruning, etc. CoPriv achieves 9.98× and 3.88× online and total communication reduction, respectively, with higher accuracy compared to SNL. CoPriv also achieves 3.87× online communication reduction with more than 3% higher accuracy compared to MetaPruning.
Researcher Affiliation | Collaboration | Wenxuan Zeng (Peking University, zwx.andy@stu.pku.edu.cn); Meng Li (Peking University, meng.li@pku.edu.cn); Haichuan Yang (Meta AI, haichuan@meta.com); Wen-jie Lu (Ant Group, juhou.lwj@antgroup.com); Runsheng Wang (Peking University, r.wang@pku.edu.cn); Ru Huang (Peking University, ruhuang@pku.edu.cn)
Pseudocode | Yes | Algorithm 1: Network Re-parameterization for Inverted Residual Block (a generic re-parameterization sketch follows the table).
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a direct link to a code repository for its methodology.
Open Datasets | Yes | We apply CoPriv to MobileNetV2 with different width multipliers on the CIFAR-100 [30] and ImageNet [9] datasets. (See the data-loading sketch after the table.)
Dataset Splits | No | The paper mentions using the CIFAR-100 and ImageNet datasets, but it does not provide specific training, validation, or test split percentages, sample counts, or explicit references to predefined standard splits.
Hardware Specification | Yes | All of our experiments are evaluated on the Intel Xeon Gold 5220R CPU @ 2.20GHz.
Software Dependencies | No | The paper mentions software such as CrypTFlow2, Eigen, and the Armadillo matrix calculation library, but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | We first search and prune redundant ReLUs for 10 epochs and then finetune the pruned network for 180 epochs with a stochastic gradient descent (SGD) optimizer [2], a cosine learning rate scheduler, and an initial learning rate of 0.1. (See the training-loop sketch after the table.)
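
The pseudocode row refers to Algorithm 1 (network re-parameterization for the inverted residual block). The paper's exact algorithm is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of one common re-parameterization step, fusing a 1x1 pointwise convolution and the 3x3 depthwise convolution that follows it into a single dense 3x3 convolution. The function name `fuse_pointwise_depthwise` and the bias-free, stride-1 assumptions are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

def fuse_pointwise_depthwise(pw: nn.Conv2d, dw: nn.Conv2d) -> nn.Conv2d:
    """Fuse a 1x1 pointwise conv followed by a depthwise conv into one dense conv.
    Assumes both convs are bias-free (illustrative sketch, not the paper's Algorithm 1)."""
    c_in, c_mid = pw.in_channels, pw.out_channels
    assert dw.in_channels == c_mid and dw.groups == c_mid, "dw must be depthwise over pw's outputs"

    fused = nn.Conv2d(c_in, c_mid, kernel_size=dw.kernel_size,
                      stride=dw.stride, padding=dw.padding, bias=False)
    with torch.no_grad():
        # dw.weight: (c_mid, 1, k, k), pw.weight: (c_mid, c_in, 1, 1).
        # Composing the two linear maps gives
        #   fused[m, c, i, j] = dw[m, 0, i, j] * pw[m, c, 0, 0],
        # which broadcasting computes directly.
        fused.weight.copy_(dw.weight * pw.weight)
    return fused

# Quick numerical check on random weights and inputs.
pw = nn.Conv2d(16, 96, kernel_size=1, bias=False)
dw = nn.Conv2d(96, 96, kernel_size=3, padding=1, groups=96, bias=False)
fused = fuse_pointwise_depthwise(pw, dw)
x = torch.randn(1, 16, 8, 8)
assert torch.allclose(dw(pw(x)), fused(x), atol=1e-5)
```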
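
For the open-datasets and dataset-splits rows: CIFAR-100 ships with a de facto standard split of 50,000 training and 10,000 test images, which is what torchvision returns for train=True/False. Below is a hedged sketch, assuming the standard torchvision loaders and a torchvision MobileNetV2 scaled by width_mult; the 0.75 multiplier, batch size, and augmentations are illustrative assumptions rather than values stated in the paper.

```python
import torch
import torchvision
from torchvision import transforms

# Standard torchvision split: 50,000 training / 10,000 test images for CIFAR-100.
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # illustrative augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = torchvision.datasets.CIFAR100(root="./data", train=True,
                                          download=True, transform=train_tf)
test_set = torchvision.datasets.CIFAR100(root="./data", train=False,
                                         download=True, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# MobileNetV2 with a reduced width multiplier (0.75x channels; value chosen for illustration).
model = torchvision.models.mobilenet_v2(num_classes=100, width_mult=0.75)
```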
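
For the experiment-setup row, a minimal sketch of the described finetuning recipe (SGD, cosine learning rate schedule, initial learning rate 0.1, 180 epochs), reusing `model` and `train_loader` from the sketch above. The momentum and weight-decay values are assumptions for illustration, and the ReLU search/pruning stage is not shown.

```python
import torch
import torch.nn.functional as F

EPOCHS = 180  # finetuning epochs reported in the paper

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # momentum/wd assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # one cosine step per epoch
```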