It Takes (Only) Two: Adversarial Generator-Encoder Networks
Authors: Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The AGE approach is evaluated on a number of standard image datasets, where we show that the quality of generated samples is comparable to that of GANs (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016), and the quality of reconstructions is comparable to or better than that of the more complex Adversarially-Learned Inference (ALI) approach of (Dumoulin et al. 2017), while training faster. |
| Researcher Affiliation | Collaboration | Dmitry Ulyanov Skolkovo Institute of Science and Technology, Yandex dmitry.ulyanov@skoltech.ru Andrea Vedaldi University of Oxford vedaldi@robots.ox.ac.uk Victor Lempitsky Skolkovo Institute of Science and Technology lempitsky@skoltech.ru |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found. |
| Open Source Code | No | No statement explicitly providing access to the authors' source code for the described methodology was found. |
| Open Datasets | Yes | We evaluate unconditional AGE networks on several standard datasets, while treating the system (Dumoulin et al. 2017) as the most natural reference for comparison... In Figure 2, we present the results on the challenging Tiny ImageNet dataset (Russakovsky et al. 2015) and the SVHN dataset (Netzer et al. 2011)... In Figure 3, we further compare the reconstructions of CelebA (Liu et al. 2015) images... For the model trained on the CIFAR-10 dataset we compute the Inception score (Salimans et al. 2016)... We also computed log likelihood for AGE and ALI on the MNIST dataset... We perform the colorization experiments on the Stanford Cars dataset (Krause et al. 2013)... |
| Dataset Splits | No | The paper uses standard datasets (e.g., CIFAR-10, SVHN, CelebA, Tiny ImageNet, MNIST, Stanford Cars) but does not provide explicit details on specific train/validation/test splits (e.g., percentages, sample counts, or explicit references to predefined splits beyond just naming the dataset). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running experiments were mentioned. |
| Software Dependencies | No | The paper mentions using 'ADAM (Kingma and Ba 2015) optimizer' but does not specify any software libraries or dependencies with version numbers (e.g., TensorFlow version, PyTorch version, Python version). |
| Experiment Setup | Yes | Hyper-parameters: We use the ADAM (Kingma and Ba 2015) optimizer with a learning rate of 0.0002. We perform two generator updates per one encoder update for all datasets. For each dataset we tried λ ∈ {500, 1000, 2000} and picked the best one. We ended up using μ = 10 for all datasets. The dimensionality M of the latent space was manually set according to the complexity of the dataset. We thus used M = 64 for the CelebA and SVHN datasets, and M = 128 for the more complex datasets of Tiny ImageNet and CIFAR-10. |
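The experiment setup quoted above can be summarized in a short sketch. This is a hypothetical reconstruction, not the authors' code: the `update_encoder` / `update_generator` callables are placeholders for the actual AGE loss steps, and only the values stated in the paper (learning rate, update ratio, λ grid, μ, latent dimensionalities) are taken from the source.

```python
# Hypothetical sketch of the reported training recipe (not the authors' code):
# ADAM with learning rate 0.0002 and two generator updates per encoder update.
ADAM_LR = 0.0002
LAMBDA_GRID = (500, 1000, 2000)   # best lambda picked per dataset
MU = 10                           # used for all datasets
LATENT_DIM = {                    # M, set manually per dataset complexity
    "CelebA": 64,
    "SVHN": 64,
    "Tiny ImageNet": 128,
    "CIFAR-10": 128,
}
GEN_UPDATES_PER_ENC_UPDATE = 2

def train(num_iterations, update_encoder, update_generator):
    """Alternate one encoder update with two generator updates per iteration."""
    for _ in range(num_iterations):
        update_encoder()
        for _ in range(GEN_UPDATES_PER_ENC_UPDATE):
            update_generator()
```

The sketch only captures the update schedule; in the actual method each update step would minimize/maximize the AGE divergence and reconstruction terms weighted by λ and μ.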