Learning Latent Subspaces in Variational Autoencoders

Authors: Jack Klys, Jake Snell, Richard Zemel

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We demonstrate the utility of the learned representations for attribute manipulation tasks on both the Toronto Face [23] and CelebA [15] datasets. |
| Researcher Affiliation | Academia | Jack Klys, Jake Snell, Richard Zemel; University of Toronto; Vector Institute; {jackklys,jsnell,zemel}@cs.toronto.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link, an explicit statement of code release, or a mention of code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | The Toronto Faces Dataset [23] consists of approximately 120,000 grayscale face images partially labelled with expressions... CelebA [15] is a dataset of approximately 200,000 images of celebrity faces with 40 labelled attributes. |
| Dataset Splits | Yes | This data was randomly split into a train, validation, and test set in 80%/10%/10% proportions (preserving the proportions of originally labelled data in each split). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions Scikit-learn [19] and PyTorch [18] but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | In practice we use Gaussian MLPs to represent distributions over relevant random variables... we choose W_i = R^2 for all i. Hence we let μ1 = (0, 0), σ1 = (0.1, 0.1) and μ2 = (3, 3), σ2 = (1, 1). ... For CondVAE and CondVAE-info the points are chosen uniformly in the range [0, 3]. ... we follow the standard practice used in the literature of setting p_j = 1 for the models CondVAE and CondVAE-info, and set p_j to the empirical mean E_{S_j}[μ^j_{φ2}(x)] over the validation set for CSVAE, in analogy with the other models. |
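The label-conditioned priors quoted in the Experiment Setup row (a 2-D subspace W_i = R^2 per attribute, with μ1 = (0, 0), σ1 = (0.1, 0.1) for one label value and μ2 = (3, 3), σ2 = (1, 1) for the other) can be sketched as follows. This is a minimal illustration, not the authors' code; the mapping of labels to the two Gaussians, and all function and variable names, are assumptions, and NumPy stands in for the paper's PyTorch stack.

```python
import numpy as np

# Hypothetical sketch of the CSVAE subspace priors quoted above:
# y = 0 -> N((0, 0), diag(0.1^2))  (tight cluster at the origin)
# y = 1 -> N((3, 3), diag(1.0^2))  (broader cluster away from it)
MU = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}
SIGMA = {0: np.array([0.1, 0.1]), 1: np.array([1.0, 1.0])}

def sample_subspace_prior(y, n=1, rng=None):
    """Draw n samples from p(w | y) = N(mu_y, diag(sigma_y^2))."""
    if rng is None:
        rng = np.random.default_rng(0)
    # loc/scale of shape (2,) broadcast against size (n, 2)
    return rng.normal(MU[y], SIGMA[y], size=(n, 2))

w_off = sample_subspace_prior(0, n=4)  # samples concentrated near (0, 0)
w_on = sample_subspace_prior(1, n=4)   # samples spread around (3, 3)
```

Because the two conditional priors barely overlap, points in the subspace can be moved between them at generation time, which is what makes the attribute-manipulation experiments on the Toronto Face and CelebA datasets possible.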