Image Synthesis with a Single (Robust) Classifier
Authors: Shibani Santurkar, Andrew Ilyas, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis. Samples (at resolution 224×224) produced by our method are shown in Figure 3 (also see Appendix B). Table 1 presents the IS of samples generated using a robust classifier. Over the Restricted ImageNet test set, our approach yields a PSNR of 21.53 (95% CI [21.49, 21.58]) compared to 21.30 (95% CI [21.25, 21.35]) from bicubic interpolation. (A reference PSNR computation is sketched after this table.) |
| Researcher Affiliation | Academia | Shibani Santurkar (MIT) shibani@mit.edu; Dimitris Tsipras (MIT) tsipras@mit.edu; Brandon Tran (MIT) btran115@mit.edu; Andrew Ilyas (MIT) ailyas@mit.edu; Logan Engstrom (MIT) engstrom@mit.edu; Aleksander Madry (MIT) madry@mit.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or clearly labeled algorithm blocks. Procedures are described in text and mathematical formulas. |
| Open Source Code | Yes | Code and models for our experiments can be found at https://git.io/robust-apps. |
| Open Datasets | Yes | CIFAR-10 and ImageNet (used in Table 1) are standard public datasets. Citations to [Kri09] for CIFAR-10 and [Rus+15] for ImageNet are provided, indicating public availability. |
| Dataset Splits | No | The paper mentions using a 'test set' and 'randomly selected examples from the test set', but it does not provide specific details on the train, validation, and test splits (e.g., percentages, sample counts, or citations to predefined splits) needed to fully reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions employing a 'generic classification setup (ResNet-50 [He+16])' but does not specify any software dependencies with version numbers (e.g., Python version, deep learning framework version such as PyTorch/TensorFlow, or specific library versions). |
| Experiment Setup | No | The paper mentions using 'ResNet-50 [He+16] with default hyperparameters' and 'projected gradient descent (PGD)', referring to Appendix A for 'experimental details'. However, the main text does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific training configuration settings. (A schematic PGD sketch follows this table.) |
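
The Experiment Setup row above notes that the paper synthesizes images by running projected gradient descent (PGD) on the input of a robustly trained ResNet-50, with concrete settings deferred to Appendix A. Below is a minimal, illustrative sketch of that kind of procedure; the step size, number of steps, and L2 radius are placeholder values (not the paper's Appendix A settings), the robust model weights would have to be loaded from the released code at https://git.io/robust-apps, and input normalization is omitted.

```python
import torch
from torchvision.models import resnet50

# Illustrative sketch of class-conditional synthesis via PGD on a (robust) classifier.
# Step size, number of steps, and the L2 radius are placeholders, not the paper's
# Appendix A settings; input normalization and robust weights are omitted for brevity.

def synthesize(model, seed, target_class, steps=60, step_size=0.5, eps=40.0):
    """Maximize the target-class logit with L2-constrained PGD on the input pixels."""
    model.eval()
    x = seed.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = model(x)[:, target_class].sum()        # target-class score to push up
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            # Normalized (L2 steepest-ascent) gradient step.
            g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
            x = x + step_size * g
            # Project back onto the L2 ball of radius eps around the seed image.
            delta = x - seed
            norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
            x = seed + delta * (eps / norm).clamp(max=1.0)
            x = x.clamp(0.0, 1.0)                      # keep pixels in a valid range
    return x.detach()

model = resnet50(num_classes=1000)                     # load robust weights in practice
seed = torch.rand(1, 3, 224, 224)                      # placeholder seed image at 224x224
sample = synthesize(model, seed, target_class=207)
```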
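
The Research Type row also quotes a super-resolution comparison in PSNR (21.53 dB vs. 21.30 dB for bicubic interpolation on the Restricted ImageNet test set). For reference, the sketch below shows a standard PSNR computation; the images are placeholders, not the paper's data, and the paper's actual evaluation pipeline lives in the released code.

```python
import torch

# Reference PSNR computation; the paper's evaluation pipeline is in the released
# code (https://git.io/robust-apps) and is not reproduced here.

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) for images with pixel values in [0, max_val]."""
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

hr = torch.rand(3, 224, 224)                          # placeholder ground-truth image
sr = (hr + 0.05 * torch.randn_like(hr)).clamp(0, 1)   # hypothetical super-resolved output
print(float(psnr(sr, hr)))                            # the paper reports ~21.5 dB
```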