Two Generator Game: Learning to Sample via Linear Goodness-of-Fit Test
Authors: Lizhong Ding, Mengyang Yu, Li Liu, Fan Zhu, Yong Liu, Yu Li, Ling Shao
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that DEAN achieves high-quality generations compared to the state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | 1Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE. 2Institute of Information Engineering, CAS, China. 3King Abdullah University of Science and Technology (KAUST), Saudi Arabia. |
| Pseudocode | No | The paper describes the optimization procedures in text and mathematical equations, but does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | The evaluations are conducted on three popular datasets, including MNIST [LBBH98] (70,000 images, 28×28), CIFAR-10 [KH09] (60,000 images, 32×32), and CelebA [YLLT15] (202,599 face images, resized and cropped to 160×160). |
| Dataset Splits | No | The paper lists the total number of images for each dataset (MNIST: 70,000; CIFAR-10: 60,000; CelebA: 202,599) but does not provide the specific train/validation/test split percentages or counts needed for reproduction. |
| Hardware Specification | Yes | All models are trained on an NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The input noise vector z ∈ R^128 for the generator (IGN) is independently drawn from a standard normal distribution. ... We set the number of test locations J = 5 to compute the value of the FSSD estimate. ... We adopt five EGN updates per IGN step. ... We use initial learning rates of 0.0001 for MNIST, CIFAR-10, and CelebA. We use the Adam optimizer [KB15] with β1 = 0.5, β2 = 0.9. |
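
The reported setup can be sketched as configuration code. This is a minimal illustration of the hyperparameters quoted above (noise dimension 128, J = 5 test locations, five EGN updates per IGN step, Adam with lr = 0.0001, β1 = 0.5, β2 = 0.9); the function names `ign_step` and `egn_update` are hypothetical placeholders, not identifiers from the paper.

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
Z_DIM = 128                 # input noise vector z in R^128
J = 5                       # number of test locations for the FSSD estimate
EGN_UPDATES_PER_IGN = 5     # five EGN updates per IGN step
ADAM_CONFIG = {"lr": 1e-4, "beta1": 0.5, "beta2": 0.9}

def sample_noise(dim=Z_DIM, rng=random):
    """Draw z with each coordinate independently from a standard normal."""
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def train_step(ign_step, egn_update):
    """Hypothetical outer loop: several EGN updates, then one IGN step."""
    z = sample_noise()
    for _ in range(EGN_UPDATES_PER_IGN):
        egn_update(z)
    ign_step(z)
```

A real reproduction would replace the placeholder callbacks with the two generator networks and pass `ADAM_CONFIG` to an Adam optimizer in the chosen framework (the paper does not name one).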