Label-Only Model Inversion Attacks via Knowledge Transfer

Authors: Bao-Ngoc Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man (Man) Cheung

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we present extensive experiment results and ablation studies:" |
| Researcher Affiliation | Academia | Ngoc-Bao Nguyen¹, Keshigeyan Chandrasegaran², Milad Abdollahzadeh¹, Ngai-Man Cheung¹; ¹Singapore University of Technology and Design (SUTD), ²Stanford University; thibaongoc_nguyen@mymail.sutd.edu.sg, ngaiman_cheung@sutd.edu.sg |
| Pseudocode | No | The paper describes its methods in prose and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Our code, demo, models and reconstructed data are available at our project page: https://ngoc-nguyen-0.github.io/lokt/" |
| Open Datasets | Yes | "We use three datasets, namely CelebA [40], FaceScrub [41], and PubFig83 [42]." |
| Dataset Splits | No | The paper describes dividing each dataset into private (D_priv) and public (D_pub) subsets for training but does not explicitly specify validation splits with percentages, sample counts, or methodologies such as cross-validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | While the paper describes the experimental setup in terms of datasets, models, and evaluation metrics, it does not provide concrete hyperparameter values or detailed training configurations (e.g., learning rates, batch sizes, number of epochs) needed to reproduce the experiments. |