Manifold-Valued Image Generation with Wasserstein Generative Adversarial Nets
Authors: Zhiwu Huang, Jiqing Wu, Luc Van Gool
AAAI 2019, pp. 3886-3893 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the three datasets, we experimentally demonstrate that the proposed manifold-aware WGAN model can generate more plausible manifold-valued images than its competitors. |
| Researcher Affiliation | Academia | Computer Vision Lab, ETH Zurich, Switzerland VISICS, KU Leuven, Belgium |
| Pseudocode | Yes | Algorithm 1 Manifold-aware Wasserstein GAN (manifold WGAN), our proposed algorithm. |
| Open Source Code | No | The paper states 'The official code is available at https://github.com/igul222/improved_wgan_training', which refers to an existing WGAN implementation, not to code for the authors' proposed method. |
| Open Datasets | Yes | For the studied manifold-valued image generation problem, we suggest three benchmark evaluations that use the HSV and CB images of the well-known CIFAR-10 (Krizhevsky and Hinton 2009), ImageNet (Oord, Kalchbrenner, and Kavukcuoglu 2016), and the popular UCL DT image dataset (Cook et al. 2006). |
| Dataset Splits | Yes | We use the 64×64 version of ImageNet, which contains 1,281,149 training images and 49,999 images for testing. |
| Hardware Specification | No | The paper mentions 'We would like to thank Nvidia for donating the GPUs used in this work.' but does not specify any particular GPU models or other hardware details. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming language versions or library versions. |
| Experiment Setup | Yes | We finally optimize the network using Adam with learning rate 0.0002, decayed linearly to 0 over 100K generator iterations, and batch size 64. |
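The reported experiment setup (Adam with learning rate 0.0002, decayed linearly to 0 over 100K generator iterations, batch size 64) implies a simple linear schedule. A minimal sketch of such a schedule is below; the function name and interface are illustrative assumptions, not taken from the paper's code.

```python
def linear_decay_lr(step, base_lr=2e-4, total_steps=100_000):
    """Learning rate decayed linearly from base_lr to 0 over total_steps
    generator iterations, as described in the paper's experiment setup.
    The helper name and signature are hypothetical."""
    frac = max(0.0, 1.0 - step / total_steps)
    return base_lr * frac
```

In a training loop this value would be assigned to the optimizer's learning rate once per generator iteration (e.g., before each Adam update), so that the rate reaches exactly 0 at iteration 100,000 and stays there.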