Semantic Interpolation in Implicit Models

Authors: Yannic Kilcher, Aurelien Lucchi, Thomas Hofmann

ICLR 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths." and, from Section 3 (Experiments): "Experimental results. The setup used for the experiments presented below closely follows popular setups in GAN research and is detailed in the Appendix." (A hedged interpolation sketch follows the table.) |
| Researcher Affiliation | Academia | "Yannic Kilcher, Aurélien Lucchi, Thomas Hofmann, Department of Computer Science, ETH Zurich, {yannic.kilcher,aurelien.lucchi,thomas.hofmann}@inf.ethz.ch" |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | "Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths." |
| Dataset Splits | No | The paper does not explicitly specify training, validation, and test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used to run the experiments. |
| Software Dependencies | No | The paper mentions software components such as the DCGAN architecture, ReLU nonlinearities, batch normalization, and RMSProp, but does not specify version numbers. |
| Experiment Setup | Yes | "The latent space for all models is of dimension 100 and the scale parameters for both the normal and gamma distributions are set to 1.0. The networks are trained using RMSProp with a learning rate of 0.0003 and mini-batches of size 100." (See the configuration sketch after the table.) |
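The "meaningful interpolation paths" finding concerns paths between points in the generator's latent space. As a rough illustration only: the sketch below samples 100-dimensional latent codes either from the standard normal prior or from a hypothetical gamma-norm prior (direction drawn uniformly on the sphere, norm drawn from a gamma distribution), then linearly interpolates between two samples. The norm/direction decomposition and the gamma shape parameter `shape_k` are assumptions made for illustration; the report above only records that both scale parameters are 1.0 and the latent dimension is 100.

```python
import numpy as np

DIM = 100    # latent dimension reported in the paper
SCALE = 1.0  # scale parameter for both priors, per the Experiment Setup row

rng = np.random.default_rng(0)

def sample_normal(n):
    """Common GAN prior: z ~ N(0, SCALE^2 I)."""
    return rng.normal(0.0, SCALE, size=(n, DIM))

def sample_gamma_norm(n, shape_k=DIM):
    """Hypothetical gamma-norm prior: uniform direction on the sphere,
    gamma-distributed norm. shape_k is an assumption, not from the paper."""
    direction = rng.normal(size=(n, DIM))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    norm = rng.gamma(shape_k, SCALE, size=(n, 1))
    return direction * norm

def lerp(z0, z1, t):
    """Linear interpolation between two latent points."""
    return (1.0 - t) * z0 + t * z1

z0, z1 = sample_gamma_norm(2)
path = np.stack([lerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 9)])
print(path.shape)  # (9, 100): nine points along the interpolation path
```

One way to read the paper's claim through this sketch: under a high-dimensional normal prior, the midpoint of a linear path has an atypically small norm, so decoded midpoints can fall off the data manifold; a prior with an explicitly modeled norm distribution is one way to keep interpolants in a typical region.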
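The Experiment Setup row pins down the optimizer, learning rate, and batch size. Below is a minimal PyTorch sketch of how those hyperparameters might be wired up. The two MLP networks are stand-ins chosen for brevity; the paper itself uses a DCGAN-style architecture with ReLU nonlinearities and batch normalization whose exact layers are not reproduced here.

```python
import torch
from torch import nn, optim

LATENT_DIM = 100  # latent dimension (paper)
LR = 0.0003       # RMSProp learning rate (paper)
BATCH = 100       # mini-batch size (paper)

# Stand-in networks; the paper's actual DCGAN architecture differs.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

opt_g = optim.RMSprop(generator.parameters(), lr=LR)
opt_d = optim.RMSprop(discriminator.parameters(), lr=LR)

# One illustrative generator step on sampled noise (no real dataset here).
z = torch.randn(BATCH, LATENT_DIM)
fake = generator(z)
loss_g = nn.functional.binary_cross_entropy_with_logits(
    discriminator(fake), torch.ones(BATCH, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```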