Non-Adversarial Mapping with VAEs

Authors: Yedid Hoshen

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Experiments: In this section we evaluate the proposed VAE-NAM against NAM, which is at present the only unsupervised cross-domain mapping method that does not use adversarial training. We evaluate the visual quality of generations, accuracy of analogies and runtime during evaluation. 4.1 Quantitative Results: We conduct several quantitative experiments to benchmark the relative performance of VAE-NAM."
Researcher Affiliation | Industry | Yedid Hoshen, Facebook AI Research
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; the methods are described through mathematical formulations and textual explanations.
Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code for VAE-NAM, nor a link to a code repository. It mentions using "the code supplied by the authors" for pre-training, which refers to external code.
Open Datasets | Yes | "We experimented with a variety of generative models and obtained the best performance with a standard DCGAN for the 32 × 32 and 64 × 64 resolution domains including Shoes-RGB and Handbags-RGB, SVHN, MNIST and Cars. ... The edges2shoes dataset. It consists of 48000 images of shoes first collected by [23]."
Dataset Splits | No | The paper mentions using standard datasets such as MNIST and SVHN and states "We also use all Y domain training images available, rather than just 2000." However, it does not provide specific percentages, sample counts, or explicit details of the train/validation/test splits used in its experiments.
Hardware Specification | Yes | "Evaluating 100 analogies on a P100 GPU for 64 × 64 images took 0.013s for VAE-NAM whereas NAM required 23s."
Software Dependencies | No | The paper mentions using a "standard DCGAN" and a "CRN [21] mapping function" but does not provide specific version numbers for these or for any other software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | "Differently from NAM, a single learning rate (3e-3) was used across all parameters as all parameters are updated with every training batch."
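The quoted setup — one shared learning rate (3e-3) applied to every parameter, with all parameters updated on each training batch — can be sketched in a few lines. This is an illustrative reconstruction only: the toy linear model, loss, and data below are invented for demonstration and are not the paper's VAE-NAM code.

```python
# Sketch of the quoted training setup: a single learning rate (3e-3)
# shared by all parameters, every parameter updated on every batch.
# The linear model y = w*x + b and its data are hypothetical stand-ins.

LR = 3e-3  # the single learning rate quoted in the paper

def sgd_step(params, grads, lr=LR):
    """Apply the same learning rate to every parameter."""
    return {name: p - lr * grads[name] for name, p in params.items()}

# Hypothetical model fitted to a fabricated batch sampled from y = 2x.
params = {"w": 0.0, "b": 0.0}
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(10_000):  # every training batch updates all parameters
    grads = {"w": 0.0, "b": 0.0}
    for x, y in batch:
        err = params["w"] * x + params["b"] - y
        grads["w"] += 2 * err * x / len(batch)  # d(mean sq. error)/dw
        grads["b"] += 2 * err / len(batch)      # d(mean sq. error)/db
    params = sgd_step(params, grads)

print(f"w ~ {params['w']:.2f}, b ~ {params['b']:.2f}")
```

The point of the sketch is the single `lr` shared by the whole parameter dictionary, in contrast to schemes that assign different learning rates to different parameter groups.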