Label Inference Attacks from Log-loss Scores
Authors: Abhinav Aggarwal, Shiva Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run experimental simulations on some real datasets to demonstrate the ease of running these attacks in practice. |
| Researcher Affiliation | Industry | Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier (Amazon). Correspondence to: Abhinav Aggarwal <aggabhin@amazon.com>, Shiva Prasad Kasiviswanathan <kasivisw@amazon.com>. |
| Pseudocode | Yes | Algorithm 1 Label Inference with No Noise in the FPA(φ) Model (Polynomial Adversary); an illustrative single-query sketch in this spirit appears below the table. |
| Open Source Code | Yes | For ensuring reproducibility, the entire experiment setup is submitted as part of the supplementary material. |
| Open Datasets | Yes | We evaluate our attacks on both simulated binary labelings and real binary classification datasets fetched from the UCI machine learning dataset repository (https://archive.ics.uci.edu/ml/machine-learning-databases). |
| Dataset Splits | No | The paper mentions 'N is the number of test samples in the dataset' for real datasets but does not provide specific details on how these datasets were split into training, validation, or test sets, nor does it specify proportions or methodology for splitting. |
| Hardware Specification | Yes | All experiments are run on a 64-bit machine with 2.6GHz 6-Core processor, using the standard IEEE-754 double precision format (1 bit for sign, 11 bits for exponent, and 53 bits for mantissa). |
| Software Dependencies | No | The paper mentions 'standard IEEE-754 double precision format' but does not specify programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, etc.) that would be needed to replicate the experiment. |
| Experiment Setup | Yes | The setup is similar to the first row, except that bounded noise of scale 0.01, 0.1, or 1 is added to the log-loss scores (the noise is roughly 1% to 100% of the raw log-loss score); a hedged noise-perturbation sketch appears below the table. |
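The paper's Algorithm 1 is not reproduced here, but the following minimal Python sketch illustrates the kind of single-query label inference the Pseudocode row refers to: the adversary crafts prediction probabilities so that a single exact log-loss score encodes every test label. The prime-ratio construction `p_i = 1 / (1 + q_i)` (for the i-th prime `q_i`), the function names, and the choice of `n = 10` are illustrative assumptions, not the paper's exact FPA(φ) algorithm.

```python
import math
import random

def first_primes(n):
    """Return the first n primes (trial division; fine for small n)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def attack_predictions(n):
    """Craft prediction probabilities p_i = 1 / (1 + q_i), so (1 - p_i) / p_i = q_i."""
    return [1.0 / (1.0 + q) for q in first_primes(n)]

def log_loss(labels, preds):
    """Standard binary cross-entropy (natural log), averaged over samples."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, preds)) / len(labels)

def infer_labels(score, preds):
    """Recover all labels from a single log-loss score on the crafted predictions."""
    n = len(preds)
    primes = first_primes(n)
    # Constant contributed by the (1 - y_i) * log(1 - p_i) terms.
    const = -sum(math.log(1 - p) for p in preds)
    # n * score - const = sum_i y_i * ln(q_i), so exp(...) is the product of the
    # primes whose labels are 1; divisibility by each prime reveals each label.
    encoded = round(math.exp(n * score - const))
    return [1 if encoded % q == 0 else 0 for q in primes]

if __name__ == "__main__":
    n = 10
    true_labels = [random.randint(0, 1) for _ in range(n)]
    preds = attack_predictions(n)
    score = log_loss(true_labels, preds)      # the only value the adversary observes
    print(infer_labels(score, preds) == true_labels)  # True for an exact score
```

With IEEE-754 doubles (53-bit mantissa, per the Hardware Specification row), the rounding step in this sketch is only reliable for small `n`; presumably the FPA(φ) model in the paper is what makes this precision dependence explicit.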
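For the noisy setting described in the Experiment Setup row, a simple way to probe robustness is to perturb the score before running the recovery step. This continues the sketch above (reusing `attack_predictions`, `log_loss`, and `infer_labels`); modeling "bounded noise of scale s" as uniform in [-s, s] is an assumption, and the paper's robust algorithms for the noisy case are not reproduced here.

```python
import random

rng = random.Random(0)  # fixed seed so the demo is repeatable

def perturb(score, scale):
    # Assumption: "bounded noise of scale s" is modeled as uniform in [-s, s];
    # the paper's exact noise distribution may differ.
    return score + rng.uniform(-scale, scale)

n = 10
labels = [random.randint(0, 1) for _ in range(n)]
preds = attack_predictions(n)
exact = log_loss(labels, preds)
for scale in (0.01, 0.1, 1.0):
    # The naive no-noise recovery degrades as the noise scale grows; the paper's
    # noise-tolerant attacks (not sketched here) are designed for this regime.
    recovered = infer_labels(perturb(exact, scale), preds)
    acc = sum(r == y for r, y in zip(recovered, labels)) / n
    print(f"noise scale {scale}: recovery accuracy {acc:.2f}")
```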