Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release

Authors: Abhishek Singh, Praneeth Vepakomma, Vivek Sharma, Ramesh Raskar

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Sec 5 we experimentally demonstrate the feasibility of our framework. We evaluate different aspects of our proposed framework: i) E1: comparison between different adversarial approaches, ii) E2: comparison with local differential privacy (LDP; a baseline sketch follows the table), iii) E3: computational tractability of our proposed framework, and iv) E4: investigating the role of ARL in improving privacy. We use the MNIST [34], FMNIST [64], and UTKFace [67] datasets for all experiments.
Researcher Affiliation | Collaboration | Abhishek Singh (1), Praneeth Vepakomma (1), Vivek Sharma (1, 2, 3), Ramesh Raskar (1); 1: MIT Media Lab, 2: MGH, Harvard Medical School, 3: Sony AI
Pseudocode | No | The main paper states, 'We describe the algorithm procedurally in the supplementary material,' but no pseudocode or algorithm block is present in the main text.
Open Source Code | Yes | The source code and other details are available at tremblerz.github.io/posthoc.
Open Datasets | Yes | We use the MNIST [34], FMNIST [64], and UTKFace [67] datasets for all experiments.
Dataset Splits | No | The paper mentions 'test set accuracy' and 'training dataset' but does not specify explicit percentages or counts for training, validation, and test splits, nor does it reference predefined splits that include validation set proportions.
Hardware Specification | No | We report end-to-end runtime evaluation on a CPU-based client and achieve 2 sec/image (MNIST) and 3.5 sec/image (UTKFace).
Software Dependencies | No | We use LipMIP [25] for computing the Lipschitz constant over the ℓ∞ norm in the input space and the ℓ1 norm in the output space. (An empirical alternative is sketched below.)
Experiment Setup | Yes | We train obfuscator models with different values of α (weighting factor) for adversarial training. Our results in Fig 3 indicate that higher weighting of adversarial regularization reduces the local Lipschitz constant, hence reducing the local sensitivity of the neural network. (See the training-loss sketch below.)
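
The E2 baseline above compares against local differential privacy. Since the report does not include the authors' implementation, the following is a minimal sketch of a standard LDP baseline for collaborative inference, assuming a client-side encoder whose intermediate representation is perturbed with Laplace noise before leaving the device; the function names, sensitivity value, and ε are illustrative assumptions, not the paper's code.

```python
import torch

def laplace_mechanism(z: torch.Tensor, sensitivity: float, epsilon: float) -> torch.Tensor:
    """Standard LDP baseline: add Laplace noise with scale sensitivity/epsilon
    to the intermediate representation before it is sent to the server."""
    scale = sensitivity / epsilon
    noise = torch.distributions.Laplace(0.0, scale).sample(z.shape)
    return z + noise

# Hypothetical usage: z = client_encoder(x) is computed locally, then
# z_private = laplace_mechanism(z, sensitivity=1.0, epsilon=2.0)
# is the only thing shared with the server-side model.
```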
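The Software Dependencies row notes that the paper uses LipMIP [25] to compute certified local Lipschitz constants (ℓ∞ input norm, ℓ1 output norm). Since LipMIP's API is not described here, below is a hedged sampling-based sketch that only lower-bounds the same quantity; `model`, the ball radius, and the sample count are assumptions for illustration.

```python
import torch

@torch.no_grad()
def empirical_local_lipschitz(model, x: torch.Tensor, radius: float = 0.1,
                              n_samples: int = 1000) -> float:
    """Lower-bound the local Lipschitz constant of `model` around `x` by
    sampling perturbations in an l_inf ball and taking the largest observed
    ratio ||f(x') - f(x)||_1 / ||x' - x||_inf. LipMIP instead certifies the
    exact constant via mixed-integer programming."""
    fx = model(x)
    best = 0.0
    for _ in range(n_samples):
        delta = (torch.rand_like(x) * 2 - 1) * radius         # uniform in the l_inf ball
        out_gap = (model(x + delta) - fx).abs().sum().item()  # l_1 output distance
        in_gap = delta.abs().max().item()                     # l_inf input distance
        if in_gap > 0:
            best = max(best, out_gap / in_gap)
    return best
```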
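Finally, the Experiment Setup row describes training obfuscator models with an α-weighted adversarial regularizer. A minimal sketch of one such training step follows, assuming a split into an obfuscator, a task head, and a frozen adversary that predicts a private attribute; the module names, losses, and sign convention are assumptions, not the authors' published training code.

```python
import torch
import torch.nn.functional as F

def obfuscator_step(obfuscator, task_head, adversary, x, y_task, y_private,
                    opt: torch.optim.Optimizer, alpha: float):
    """One obfuscator update: minimize the task loss while maximizing the
    adversary's loss on the private attribute, weighted by alpha. Larger
    alpha emphasizes the adversarial regularizer, which (per Fig 3) lowers
    the local Lipschitz constant and hence the local sensitivity."""
    z = obfuscator(x)
    task_loss = F.cross_entropy(task_head(z), y_task)
    adv_loss = F.cross_entropy(adversary(z), y_private)
    loss = task_loss - alpha * adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

# In a full ARL loop the adversary is typically updated in alternation on
# its own objective; that step is omitted here for brevity.
```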