Adversarially Learned Representations for Information Obfuscation and Inference

Authors: Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Galen Reeves, Guillermo Sapiro

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic datasets are shown in Section 4. Section 5 exemplifies the framework on real data through three use-cases: Gender vs. Emotion, where emotion is obfuscated from the filtered image but gender can still be inferred; Subject vs. Gender, where filters are trained so that face images retain subject-verification performance while obfuscating gender inference; and Subject vs. Subject, where the goal is to allow subject verification only for a subset of consenting users, while non-consenting users' identities are obfuscated and made hard to recover from the filtered images.
Researcher Affiliation | Academia | Duke University, Durham, North Carolina, USA; University College London, London, UK.
Pseudocode | Yes | Algorithm 1: Adversarial Information Obfuscation.
Open Source Code | Yes | An implementation of this framework is available at www.github.com/MartinBertran/AIOI.
Open Datasets | Yes | We conduct this experiment over the CelebA dataset (Liu et al., 2015). We test this over the FaceScrub dataset (Kemelmacher-Shlizerman et al., 2016).
Dataset Splits | No | The paper mentions training on datasets and evaluating on test sets, but specific percentages or counts for the training, validation, and test splits are not explicitly provided in the main text.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions specific network architectures such as Xception and U-Net, but does not list software dependencies with version numbers (e.g., TensorFlow 2.x, PyTorch 1.x).
Experiment Setup | No | Algorithm 1 lists hyperparameters (lr, λ, k) as inputs, and the paper states that "Detailed architectures for both networks are shown in Supplementary Material", but specific values for hyperparameters such as learning rate, batch size, or number of epochs are not provided in the main text (see the illustrative sketch after this table for how these hyperparameters enter the training loop).
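
The Pseudocode and Experiment Setup rows together describe an alternating minimax loop driven by the hyperparameters (lr, λ, k). Below is a minimal PyTorch sketch of such a loop under stated assumptions: the network modules (filter_net, utility_net, secret_net), the data loader contract, the loss combination, and all hyperparameter defaults are illustrative stand-ins, not the authors' released implementation (see the AIOI repository for that).

```python
# A minimal sketch of an adversarial information-obfuscation loop in the
# spirit of Algorithm 1. All module names, the loader contract, and the
# default hyperparameter values below are illustrative assumptions.
import torch

def train_obfuscator(filter_net, utility_net, secret_net, loader,
                     lr=1e-4, lam=0.5, k=5, epochs=10):
    """Alternating minimax training. lr, lam (λ), and k mirror the
    hyperparameters Algorithm 1 takes as input; the adversary takes
    k gradient steps per filter/utility update."""
    opt_main = torch.optim.Adam(
        list(filter_net.parameters()) + list(utility_net.parameters()), lr=lr)
    opt_adv = torch.optim.Adam(secret_net.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        # Assumed loader contract: image x, utility label u, secret label s.
        for x, u, s in loader:
            # 1) Adversary: k steps to infer the secret from filtered data;
            #    detach() blocks gradients from reaching the filter here.
            for _ in range(k):
                opt_adv.zero_grad()
                loss_adv = ce(secret_net(filter_net(x).detach()), s)
                loss_adv.backward()
                opt_adv.step()
            # 2) Filter + utility: preserve utility inference while making
            #    the secret hard to recover, traded off by λ.
            opt_main.zero_grad()
            z = filter_net(x)
            loss = ce(utility_net(z), u) - lam * ce(secret_net(z), s)
            loss.backward()
            opt_main.step()
```

In this sketch, larger λ pushes the filter toward stronger obfuscation of the secret variable at some cost to utility, while k controls how well-trained the adversary is when the filter updates; these are exactly the kinds of values the Experiment Setup row flags as unspecified in the main text.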