Adversarially Regularized Autoencoders

Authors: Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, Yann LeCun

ICML 2018

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | Experiments apply ARAE to discretized images and text sequences. The latent variable model generates varied samples that can be quantitatively shown to cover the input spaces, and produces consistent image and sentence manipulations by moving around the latent space via interpolation and offset vector arithmetic. When the ARAE model is trained with task-specific adversarial regularization, it improves upon the strong sentiment-transfer results reported in Shen et al. (2017) and produces compelling outputs on a topic-transfer task using only a single shared space.

Researcher Affiliation | Collaboration | (1) Department of Computer Science, New York University; (2) Facebook AI Research; (3) School of Engineering and Applied Sciences, Harvard University.

Pseudocode | Yes | Algorithm 1: ARAE Training.

Open Source Code | Yes | Code is available at https://github.com/jakezhaojb/ARAE.

Open Datasets | Yes | We experiment with ARAE on three setups: (1) a small model on discretized images, trained on the binarized version of MNIST; (2) a model for text sequences, trained on the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015); and (3) a model for text transfer, trained on the Yelp/Yahoo datasets for unaligned sentiment/topic transfer.

Dataset Splits | No | The paper gives specific training-set sizes for SNLI (120k (Medium), 59k (Small), and 28k (Tiny)) but does not provide counts or percentages for validation splits for any of the datasets used.

Hardware Specification | Yes | We also thank the NVIDIA Corporation for the donation of a Titan X Pascal GPU that was used for this research.

Software Dependencies | No | The paper mentions tools such as RNNs, MLPs, and the fastText library, but does not provide version numbers for any software dependency.

Experiment Setup | Yes | For ARAE, we experimented with different λ^(1) weightings on the adversarial loss (see Section 4), using λ^(1)_a = 1 and λ^(1)_b = 10. Both settings use λ^(2) = 1.