A Convex Duality Framework for GANs
Authors: Farzan Farnia, David Tse
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate our theoretical results, we used the CelebA [20] and LSUN-bedroom [21] datasets. Furthermore, in the Appendix we include the results of our experiments over the MNIST [22] dataset. We considered vanilla GAN [1] with the minimax formulation in (17) and the DCGAN [23] convolutional architecture for the neural net discriminator and generator. (The standard vanilla GAN minimax objective is sketched after this table.) |
| Researcher Affiliation | Academia | Farzan Farnia farnia@stanford.edu David Tse dntse@stanford.edu Department of Electrical Engineering, Stanford University. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: "We used the code provided by [13]", which refers to another work's code; it does not provide access to the authors' own implementation of the described methodology. |
| Open Datasets | Yes | To evaluate our theoretical results, we used the CelebA [20] and LSUN-bedroom [21] datasets. Furthermore, in the Appendix we include the results of our experiments over the MNIST [22] dataset. |
| Dataset Splits | No | Figure 2 shows how the discriminator loss evaluated over 2000 validation samples, which is an estimate of the divergence measure, changed as we trained the DCGAN over LSUN samples. While it mentions "2000 validation samples", it does not provide specific percentages or methodology for the training/validation/test split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions "Adam optimizer [24]" but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | We used the code provided by [13] and trained DCGAN via Adam optimizer [24] for 200,000 generator iterations. We applied 5 discriminator updates per generator update. (This training schedule is sketched in the code after this table.) |
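
For context on the Research Type row: the paper trains vanilla GAN with "the minimax formulation in (17)". The exact form of equation (17) is not reproduced in this report; as a point of reference, the standard vanilla GAN minimax objective of Goodfellow et al. [1], on which that formulation builds, is:

```latex
\min_{G}\,\max_{D}\;
  \mathbb{E}_{x \sim P_X}\!\big[\log D(x)\big]
  \;+\;
  \mathbb{E}_{z \sim P_Z}\!\big[\log\big(1 - D(G(z))\big)\big]
```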
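
The Experiment Setup and Dataset Splits rows together describe the reported training procedure: DCGAN trained with Adam for 200,000 generator iterations, 5 discriminator updates per generator update, with the discriminator loss over 2000 validation samples tracked as a divergence estimate (the paper's Figure 2). Below is a minimal PyTorch sketch of that schedule; the network widths, learning rates, image size, logging interval, and random-tensor data stand-ins are illustrative assumptions, not the authors' configuration.

```python
# Minimal PyTorch sketch of the reported schedule: Adam, 200,000 generator
# iterations, 5 discriminator updates per generator update, and a
# discriminator-loss divergence estimate on held-out validation samples.
# All hyperparameters and the data stand-ins below are assumptions.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
z_dim, batch = 100, 64

# Placeholder DCGAN-style networks (real runs would use the full
# convolutional architectures of [23]).
G = nn.Sequential(
    nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
).to(device)
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 0), nn.Flatten(),
).to(device)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
ones = torch.ones(batch, 1, device=device)
zeros = torch.zeros(batch, 1, device=device)

def real_batch(n=batch):
    # Stand-in for CelebA / LSUN-bedroom minibatches (16x16 here for brevity).
    return torch.randn(n, 3, 16, 16, device=device)

val_x = real_batch(2000)  # stand-in for the 2000 held-out validation samples

for g_iter in range(200_000):       # 200,000 generator iterations
    for _ in range(5):              # 5 discriminator updates per generator update
        z = torch.randn(batch, z_dim, 1, 1, device=device)
        d_loss = bce(D(real_batch()), ones) + bce(D(G(z).detach()), zeros)
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    z = torch.randn(batch, z_dim, 1, 1, device=device)
    g_loss = bce(D(G(z)), ones)     # non-saturating generator loss
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    if g_iter % 1000 == 0:          # assumed logging interval
        with torch.no_grad():
            # Discriminator loss over the validation samples, used as the
            # divergence estimate tracked in the paper's Figure 2.
            zv = torch.randn(2000, z_dim, 1, 1, device=device)
            val_loss = (bce(D(val_x), torch.ones(2000, 1, device=device))
                        + bce(D(G(zv)), torch.zeros(2000, 1, device=device)))
        print(f"iter {g_iter}: validation divergence estimate {val_loss.item():.4f}")
```

In a real reproduction, `real_batch` and `val_x` would be replaced by loaders for the CelebA or LSUN-bedroom datasets, and the schedule above would be driven from the code provided by [13], as the paper reports.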