Discriminator Rejection Sampling
Authors: Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, Augustus Odena
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate the efficacy of DRS on a mixture of Gaussians and on the state-of-the-art SAGAN model. On ImageNet, we train an improved baseline that increases the best published Inception Score from 52.52 to 62.36 and reduces the Fréchet Inception Distance from 18.65 to 14.79. We then use DRS to further improve on this baseline, improving the Inception Score to 76.08 and the FID to 13.75. |
| Researcher Affiliation | Collaboration | Samaneh Azadi UC Berkeley Catherine Olsson Google Brain Trevor Darrell UC Berkeley Ian Goodfellow Google Brain Augustus Odena Google Brain |
| Pseudocode | Yes | Figure 1: ... Right: the DRS algorithm. |
| Open Source Code | No | The paper does not provide explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We investigate the impact of DRS on a low-dimensional synthetic data set consisting of a mixture of twenty-five 2D isotropic Gaussian distributions (each with standard deviation of 0.05) arranged in a grid (Dumoulin et al., 2016; Srivastava et al., 2017; Lin et al., 2017). We use a Self-Attention GAN (SAGAN) (Zhang et al., 2018) in our experiments... on the conditional ImageNet synthesis task (in which images are synthesized conditioned on class identity)... Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. |
| Dataset Splits | Yes | Keep Training continues training using early stopping on the validation set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or specific cloud instances) used for experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions using a 'pre-trained Inception classifier' and 'VGG16 model' but does not specify any software names with version numbers or programming language versions. |
| Experiment Setup | Yes | After reproducing the results reported by Zhang et al. (2018) (with the learning rate of 1e-4), we fine-tuned a trained SAGAN with a much lower learning rate (1e-7) for both generator and discriminator. We set γ dynamically to the 80th percentile of the F(x) values in each batch. Here, we have set γ dynamically for each batch, to the 95th percentile of F̂(x) for all x in the batch. |
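The Experiment Setup row quotes the paper's batch-wise, percentile-based choice of γ. A minimal sketch of what that rejection step could look like, assuming per-sample scores F̂(x) have already been computed from discriminator logits and that acceptance follows a sigmoid rule as in the paper's Figure 1 algorithm; the function name `drs_accept` and its signature are illustrative, not from the paper:

```python
import numpy as np

def drs_accept(f_hat, percentile=95.0, rng=None):
    """Sketch of a batch-wise DRS-style rejection step.

    f_hat: 1-D array of per-sample scores F̂(x) derived from
           discriminator logits (assumed precomputed).
    percentile: γ is set dynamically to this percentile of the
                batch's F̂ values, mirroring the paper's
                80th/95th-percentile settings.
    Returns a boolean mask of accepted samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Dynamic threshold: γ = chosen percentile of F̂ within the batch.
    gamma = np.percentile(f_hat, percentile)
    # Acceptance probability: sigmoid(F̂(x) - γ).
    p_accept = 1.0 / (1.0 + np.exp(-(np.asarray(f_hat) - gamma)))
    return rng.random(len(f_hat)) < p_accept
```

With γ pinned near the top of each batch's score distribution, most samples receive a small acceptance probability, so only generations the discriminator rates highly survive the filter.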