Generative Ratio Matching Networks
Authors: Akash Srivastava, Kai Xu, Michael U. Gutmann, Charles Sutton
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we empirically compare GRAM-nets against MMD-GANs and vanilla GANs on the CIFAR-10 and CelebA image datasets. |
| Researcher Affiliation | Collaboration | Akash Srivastava (MIT-IBM Watson AI Lab, akash.srivastava@ibm.com); Michael U. Gutmann (University of Edinburgh, michael.gutmann@ed.ac.uk); Kai Xu (University of Edinburgh, kai.xu@ed.ac.uk); Charles Sutton (Google AI, charlessutton@google.com) |
| Pseudocode | Yes | Algorithm 1: Generative ratio matching |
| Open Source Code | Yes | Official implementations are available at https://github.com/GRAM-nets. |
| Open Datasets | Yes | In this section we empirically compare GRAM-nets against MMD-GANs and vanilla GANs on the CIFAR-10 and CelebA image datasets. |
| Dataset Splits | No | The paper states that FID is reported on a 'held-out set that was not used to train the models', implying a train/test split, but does not provide specific details on validation splits (e.g., percentages, sample counts) or how the data was partitioned into train, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU models, CPU types, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers like ADAM and RMSprop, and points to an external implementation of MMD-GANs, but it does not specify version numbers for any key software dependencies or libraries required for reproduction. |
| Experiment Setup | Yes | To facilitate fair comparison with MMD-GAN we set all the hyperparameters shared across the three methods to the values used in Li et al. (2017). Therefore, we use a learning rate of 5e-5 and set the batch size to 64. |
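
The Pseudocode row above points to Algorithm 1 (generative ratio matching), which, like the MMD-GANs it is compared against, is built around a kernel two-sample statistic. As a structural illustration only, here is a minimal PyTorch sketch of the unbiased Gaussian-kernel MMD² estimator; the bandwidth `sigma`, the single-kernel choice, and the function names are assumptions of this sketch, not the authors' implementation at https://github.com/GRAM-nets.

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of a and b."""
    d2 = torch.cdist(a, b) ** 2          # pairwise squared Euclidean distances
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples x and y."""
    m, n = x.shape[0], y.shape[0]
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Drop the diagonal (self-similarity) terms for unbiasedness.
    term_xx = (kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))
    term_yy = (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()

# Usage on random stand-in batches (batch size 64, as quoted above):
x = torch.randn(64, 16)   # projected real samples
y = torch.randn(64, 16)   # projected generated samples
print(mmd2_unbiased(x, y).item())
```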
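
The Experiment Setup row fixes only the learning rate (5e-5) and batch size (64), taken from Li et al. (2017). Everything else in the sketch below, including the toy architectures and the use of RMSprop for both networks, is a hypothetical stand-in showing how those quoted values would be wired up; the Software Dependencies row confirms no library versions are given, so PyTorch is also an assumption here.

```python
import torch
from torch import nn, optim

# Toy stand-ins for the generator and the low-dimensional projection
# network; the paper's actual architectures are not specified in this table.
generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
projector = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 16))

# Values quoted from the paper via the table: learning rate 5e-5, batch
# size 64. RMSprop is one of the optimizers the paper mentions; applying
# it to both networks is an assumption of this sketch.
learning_rate, batch_size = 5e-5, 64
g_opt = optim.RMSprop(generator.parameters(), lr=learning_rate)
f_opt = optim.RMSprop(projector.parameters(), lr=learning_rate)
```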