Invariant Representations through Adversarial Forgetting
Authors: Ayush Jaiswal, Daniel Moyer, Greg Ver Steeg, Wael AbdAlmageed, Premkumar Natarajan
AAAI 2020, pp. 4272-4279
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks. |
| Researcher Affiliation | Academia | Ayush Jaiswal, Daniel Moyer, Greg Ver Steeg, Wael Abd Almageed, Premkumar Natarajan Information Sciences Institute, University of Southern California {ajaiswal, gregv, wamageed, pnataraj}@isi.edu, moyerd@usc.edu |
| Pseudocode | No | The paper describes the framework's design and training process in text and diagrams, but does not include structured pseudocode or algorithm blocks. A hedged sketch of such a training objective appears after this table. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of its source code, nor does it include links to a code repository. |
| Open Datasets | Yes | The proposed framework is evaluated on the dSprites (Matthey et al. 2017) dataset of shapes with independent factors: color, shape, scale, orientation, and position. The Adult (Dheeru and Karra Taniskidou 2017) and German (Dheeru and Karra Taniskidou 2017) datasets have been popularly employed in fairness settings. |
| Dataset Splits | No | The paper mentions training and testing sets for various datasets (e.g., 'split into training and testing sets' for Chairs; 'one image from each s-category is used for training and the rest of the dataset is used for testing' for Extended Yale-B), but does not explicitly describe a separate validation split. |
| Hardware Specification | No | The paper mentions the software backend used ('Keras with TensorFlow backend') but does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states that the model was 'implemented in Keras with TensorFlow backend' and 'The Adam optimizer was used', but does not specify version numbers for these software components or other dependencies. |
| Experiment Setup | Yes | The Adam optimizer was used with 10^-4 learning rate and 10^-4 decay. The hyperparameters ρ, λ, and δ were tuned through grid search in powers of 10. ...the weights of the discriminator and the rest of the model are updated in the frequency ratio of k : 1. We found k = 10 to work well in our experiments. A sketch of this update schedule appears below the table. |
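
Since the paper provides no pseudocode, the following is a minimal sketch of the kind of adversarial-forgetting objective it describes in text: an encoder code z is element-wise masked by a forget gate, a decoder and predictor keep the codes useful, and a discriminator adversarially probes the masked code for the nuisance/bias factor s. All layer sizes and names are our assumptions, and which of the paper's grid-searched weights (ρ, λ, δ) attaches to which loss term is a guess, not the paper's specification.

```python
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# Hypothetical sizes; the paper does not pin these down in the quoted text.
INPUT_DIM, CODE_DIM, NUM_CLASSES, NUM_NUISANCE = 784, 64, 10, 5

def mlp(hidden, out_dim, out_act=None, name=None):
    """Small fully connected stack standing in for each sub-network."""
    return Sequential([layers.Dense(hidden, activation="relu"),
                       layers.Dense(out_dim, activation=out_act)], name=name)

encoder = mlp(256, CODE_DIM, name="encoder")                         # x -> z
forget_gate = mlp(256, CODE_DIM, "sigmoid", name="forget_gate")      # x -> m in (0, 1)^d
decoder = mlp(256, INPUT_DIM, name="decoder")                        # z -> x_hat
predictor = mlp(128, NUM_CLASSES, "softmax", name="predictor")       # z * m -> y
discriminator = mlp(128, NUM_NUISANCE, "softmax", name="adversary")  # z * m -> s

ce = tf.keras.losses.SparseCategoricalCrossentropy()
mse = tf.keras.losses.MeanSquaredError()

# Placeholder stand-ins for the paper's grid-searched weights.
W_RECON, W_ADV = 0.1, 1.0

def main_loss(x, y, s):
    """Loss for everything except the discriminator (held frozen here)."""
    z = encoder(x)
    z_tilde = z * forget_gate(x)          # element-wise "forgetting" mask
    task = ce(y, predictor(z_tilde))      # target task on the masked code
    recon = mse(x, decoder(z))            # reconstruction keeps z informative
    adv = -ce(s, discriminator(z_tilde))  # reward fooling the adversary on s
    return task + W_RECON * recon + W_ADV * adv
```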
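
And a sketch of the reported update schedule: Adam at a 10^-4 learning rate with 10^-4 decay, with the discriminator updated k = 10 times per update of the rest of the model. Reading 'decay' as Keras 2's inverse-time learning-rate decay, and the function and variable names below, are assumptions on our part.

```python
import tensorflow as tf

K_DISC = 10  # discriminator-to-main update ratio k : 1 reported in the paper

# Assumption: the paper's "decay" is Keras 2's per-step inverse-time decay,
# lr = lr0 / (1 + decay * iterations), expressed here as a schedule.
schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-4, decay_steps=1, decay_rate=1e-4)
opt_main = tf.keras.optimizers.Adam(learning_rate=schedule)
opt_disc = tf.keras.optimizers.Adam(learning_rate=schedule)

def train(dataset, main_vars, disc_vars, main_loss_fn, disc_loss_fn):
    """Alternate k discriminator updates with one update of the rest."""
    for step, (x, y, s) in enumerate(dataset):
        if step % (K_DISC + 1) < K_DISC:
            # k out of every k + 1 steps: train the discriminator to
            # recover the nuisance/bias factor s from the masked code.
            with tf.GradientTape() as tape:
                loss = disc_loss_fn(x, s)
            grads = tape.gradient(loss, disc_vars)
            opt_disc.apply_gradients(zip(grads, disc_vars))
        else:
            # Remaining step: update the encoder, forget gate, decoder,
            # and predictor against the (frozen) discriminator.
            with tf.GradientTape() as tape:
                loss = main_loss_fn(x, y, s)
            grads = tape.gradient(loss, main_vars)
            opt_main.apply_gradients(zip(grads, main_vars))
```

Here `main_vars` would collect the trainable weights of the encoder, forget gate, decoder, and predictor, and `disc_vars` those of the discriminator, so that each optimizer only touches its own sub-networks.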