Invariant Representations without Adversarial Training

Authors: Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, Greg Ver Steeg

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of our proposed invariance penalty on two datasets with a fair classification task. We also demonstrate Fader Network-like capabilities for manipulating specified factors in generative modeling on the MNIST dataset.
Researcher Affiliation | Academia | Information Sciences Institute, University of Southern California. {moyerd, gaos, brekelma}@usc.edu; {gregv, galstyan}@isi.edu
Pseudocode | No | The paper describes methods and derivations but does not include any structured pseudocode or algorithm blocks (e.g., a labeled 'Algorithm 1').
Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology (e.g., a specific repository link, an explicit code release statement, or code in supplementary materials).
Open Datasets | Yes | Both datasets are from the UCI repository. The preprocessing for both datasets follows Zemel et al. 2013 [22], which is also the source for the pre-processing in our baselines [15, 21]. The first dataset is the German dataset... The second dataset is the Adult dataset... We demonstrate a form of unsupervised image manipulation... on the MNIST dataset.
Dataset Splits | Yes | Optimization and parameter tuning are done via a held-out validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components like the 'Adam' optimizer and implies the use of deep learning frameworks, but it does not provide specific version numbers for any ancillary software dependencies (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | We use a latent space of 30 dimensions for each case. We train with Adam using the same hyperparameter settings as in Xie et al., and a batch size of 128.
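
The Open Datasets and Dataset Splits rows refer to the UCI German and Adult datasets and a held-out validation set used for optimization and parameter tuning. The sketch below is a minimal, illustrative way to load the UCI Adult data and carve out such a split; the download URL, column names, choice of protected attribute, and split ratio are assumptions made for illustration, not details taken from the paper (which follows the preprocessing of Zemel et al. 2013).

```python
# Illustrative only: load the UCI Adult dataset and hold out a validation
# split. URL, column names, protected attribute, and split ratio are
# assumptions, not the paper's exact preprocessing.
import pandas as pd
from sklearn.model_selection import train_test_split

ADULT_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country", "income",
]

df = pd.read_csv(
    ADULT_URL, names=COLUMNS, na_values="?", skipinitialspace=True
).dropna()

y = (df["income"] == ">50K").astype(int)         # prediction target
c = (df["sex"] == "Female").astype(int)          # protected factor (illustrative choice)
x = pd.get_dummies(df.drop(columns=["income"]))  # one-hot encode covariates

# Train / held-out validation split for optimization and parameter tuning.
x_train, x_val, y_train, y_val, c_train, c_val = train_test_split(
    x, y, c, test_size=0.2, random_state=0
)
```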
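The Experiment Setup row states only a 30-dimensional latent space, the Adam optimizer with the hyperparameter settings of Xie et al., and a batch size of 128. The sketch below wires those stated values into a conditional encoder/decoder skeleton in PyTorch; the layer sizes, learning rate, input dimensions, and conditioning scheme are assumptions, and the paper's invariance penalty itself is not reproduced here.

```python
# Illustrative sketch of the stated configuration: 30-dim latent space,
# Adam optimizer, batch size 128. Architecture and learning rate are assumed.
import torch
import torch.nn as nn

LATENT_DIM = 30   # "a latent space of 30 dimensions"
BATCH_SIZE = 128  # "a batch size of 128"

class Encoder(nn.Module):
    def __init__(self, x_dim, z_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Decoder conditioned on the factor c, so z need not encode it."""
    def __init__(self, x_dim, c_dim, z_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(), nn.Linear(256, x_dim)
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=-1))

x_dim, c_dim = 113, 1  # example sizes; they depend on the one-hot encoding used
enc, dec = Encoder(x_dim), Decoder(x_dim, c_dim)
optimizer = torch.optim.Adam(
    list(enc.parameters()) + list(dec.parameters()), lr=1e-3  # lr assumed
)
```

Conditioning the decoder on c (so the latent code is not forced to carry that information) reflects the paper's general setup, but the specific layer widths and the omitted invariance penalty mean this is a scaffold, not a reproduction.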