The Fundamental Limits of Least-Privilege Learning

Authors: Theresa Stadler, Bogdan Kulynych, Michael Gastpar, Nicolas Papernot, Carmela Troncoso

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we empirically validate our theoretical results. We demonstrate that the fundamental trade-off between a representation's utility for its intended task and the LPP applies to any feature representation, regardless of the feature learning technique, model architecture, or dataset.
Researcher Affiliation | Academia | 1EPFL, Lausanne, Switzerland; 2Lausanne University Hospital & University of Lausanne, Switzerland; 3University of Toronto & Vector Institute, Toronto, Canada.
Pseudocode | No | No pseudocode or algorithm block found. The paper contains formal definitions, theorems, and proofs.
Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in the paper.
Open Datasets | Yes | In our main experiment, we use the LFWA+ image dataset which has multiple binary attribute labels for each image (Huang et al., 2008). ... We run a simple experiment on the Adult dataset (Kohavi & Becker, 2013)... The Texas Hospital Discharge dataset (Texas Department of State Health Services, Austin, Texas, 2013)...
Dataset Splits | Yes | The full dataset contains 13,143 examples which we split in the following way: 20% of records are given to the adversary as an auxiliary dataset DA. The remaining 10,514 records are split 80/20 across a train set DT and an evaluation set DE.
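The split arithmetic quoted above can be checked directly: taking 20% of the full dataset for the adversary leaves exactly the 10,514 records the paper reports. A minimal sketch (record counts only; the actual sampling procedure is not described in this excerpt):

```python
# Reproduce the record counts of the split described in the paper:
# 20% of 13,143 records form the adversary's auxiliary set D_A,
# and the remaining records are split 80/20 into D_T and D_E.
n_total = 13_143
n_aux = round(0.20 * n_total)   # adversary auxiliary set D_A
n_rest = n_total - n_aux        # should equal 10,514, matching the paper
n_train = round(0.80 * n_rest)  # train set D_T
n_eval = n_rest - n_train       # evaluation set D_E
print(n_aux, n_rest, n_train, n_eval)
```

This confirms the paper's figure of 10,514 remaining records after the auxiliary split.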
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided. The paper describes model architectures and training parameters but not the underlying hardware.
Software Dependencies | No | The paper mentions software like PyTorch, SGD, Random Forest, and TabNet but does not provide specific version numbers for any of these or other software dependencies.
Experiment Setup | Yes | Following Melis et al. (2019), we use a convolutional network with three spatial convolution layers with 32, 64, and 128 filters, kernel size set to (3, 3), max pooling layers with pooling size set to 2, followed by two fully connected layers of size 256 and 2. We use ReLU as the activation function for all layers. ... Training batch size is 32, SGD learning rate is 0.01.
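The architecture quoted above is concrete enough to sketch in PyTorch. The excerpt does not state the input image resolution or the convolution padding, so the sketch below assumes a hypothetical 64x64 RGB input with padding=1 (so pooling alone controls the spatial downsampling) and uses `nn.LazyLinear` to infer the flattened feature size; treat those choices as illustrative, not the authors' exact configuration:

```python
import torch
from torch import nn

# Sketch of the network described in the paper (after Melis et al., 2019):
# three conv layers with 32/64/128 filters, (3, 3) kernels, max pooling
# with pool size 2, then fully connected layers of size 256 and 2,
# with ReLU activations throughout.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=(3, 3), padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=(3, 3), padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=(3, 3), padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(256), nn.ReLU(),  # input size inferred on first forward pass
    nn.Linear(256, 2),
)

# Hypothetical batch: the paper's batch size of 32, assumed 64x64 RGB crops.
x = torch.randn(32, 3, 64, 64)
logits = model(x)  # dummy forward pass also initializes the lazy layer

# Optimizer settings stated in the paper: SGD with learning rate 0.01.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```

Creating the optimizer after a first forward pass is deliberate: `nn.LazyLinear` parameters are uninitialized until the layer sees its input shape.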