MonoFlow: Rethinking Divergence GANs via the Perspective of Wasserstein Gradient Flows
Authors: Mingxuan Yi, Zhanxing Zhu, Song Liu
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Consistent empirical studies are included to validate the effectiveness of our framework. |
| Researcher Affiliation | Academia | 1University of Bristol. 2Changping National Laboratory, China. 3Peking University. Correspondence to: Mingxuan Yi <mingxuan.yi@bristol.ac.uk>. |
| Pseudocode | No | The paper contains mathematical equations and descriptions but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | All codes are available at https://github.com/YiMX/MonoFlow. |
| Open Datasets | Yes | We use MNIST, CIFAR-10 (Krizhevsky et al., 2009) and Celeb-A (Liu et al., 2015) datasets in this experiment. |
| Dataset Splits | No | The paper states models are trained on "training sets" and evaluated on "test sets" but does not specify how these sets are split (e.g., percentages, methodology, or specific validation set usage). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The neural network architecture used here is DCGAN (Radford et al., 2015) and we follow the vanilla GAN framework where the log density ratio is obtained by logit output from the binary classifier and the model is trained with 15 epochs. ... The generator is initialized at N((1.0, 1.0)ᵀ, [[1.00, 0.00], [0.00, 1.00]]) and the target distribution is N((0.0, 0.0)ᵀ, [[1.00, 0.80], [0.80, 0.89]]). |
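The 2-D Gaussian setup quoted in the experiment-setup row can be sketched as below. Only the means and covariances come from the paper; the variable names, sampling code, and the `logit_to_prob` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Generator initialization quoted in the table: N((1.0, 1.0), I)
init_mean = np.array([1.0, 1.0])
init_cov = np.array([[1.00, 0.00],
                     [0.00, 1.00]])

# Target distribution quoted in the table: N((0.0, 0.0), [[1.00, 0.80], [0.80, 0.89]])
target_mean = np.array([0.0, 0.0])
target_cov = np.array([[1.00, 0.80],
                       [0.80, 0.89]])

rng = np.random.default_rng(0)
fake = rng.multivariate_normal(init_mean, init_cov, size=1000)
real = rng.multivariate_normal(target_mean, target_cov, size=1000)

# In the vanilla GAN framework the table describes, the log density ratio
# log(p_data(x) / p_g(x)) is read off as the logit output d(x) of a binary
# classifier trained to separate `real` from `fake`; applying the sigmoid
# recovers the usual discriminator probability. (Helper name is hypothetical.)
def logit_to_prob(logit):
    return 1.0 / (1.0 + np.exp(-logit))
```

A perfectly uninformative classifier outputs logit 0, i.e. `logit_to_prob(0.0) == 0.5`, corresponding to a density ratio of 1.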