Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers
Authors: Piotr Teterwak, Chiyuan Zhang, Dilip Krishnan, Michael C Mozer
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We train the generator on TPUv3 accelerators using the same ImageNet (Russakovsky et al., 2015) training set that the classifiers are trained on, and evaluate using the test set. |
| Researcher Affiliation | Collaboration | ¹Presently at Boston University; work was begun while author was an AI Resident at Google Research. ²Google Research. ³University of Colorado, Boulder. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | We provide pre-trained models and visualizations at https://sites.google.com/view/understanding-invariance/home. |
| Open Datasets | Yes | We train the generator on TPUv3 accelerators using the same ImageNet (Russakovsky et al., 2015) training set that the classifiers are trained on, and evaluate using the test set. |
| Dataset Splits | No | No specific dataset split information (percentages, sample counts, or explicit splitting methodology) for the training, validation, and test sets was found. While 'validation sample logit vectors' are mentioned for a specific analysis, a complete split breakdown is not provided. |
| Hardware Specification | Yes | We train the generator on TPUv3 accelerators using the same ImageNet (Russakovsky et al., 2015) training set that the classifiers are trained on, and evaluate using the test set. |
| Software Dependencies | No | No specific version numbers for key software components (e.g., libraries, frameworks) are provided; only names such as 'BigGAN implementation' and 'Caffe' appear. |
| Experiment Setup | No | While some experiment-specific parameters (e.g., FGSM attack strength ε = 0.1, noise for logit perturbation N(μ = 0, σ² = 0.55)) are mentioned, the main training configurations and hyperparameters (learning rate, batch size, epochs, optimizer) are stated to be 'as described in the Supplementary Materials' rather than given explicitly in the main text. |
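
For context, the two perturbations quoted in the Experiment Setup row are standard and easy to reconstruct. Below is a minimal sketch of both: a one-step FGSM image perturbation at ε = 0.1, and additive Gaussian noise on the logit vector with μ = 0, σ² = 0.55, which are the only values the main text states. The choice of PyTorch and all names here (`fgsm_perturb`, `perturb_logits`, `model`, `loss_fn`) are our own assumptions for illustration, not the authors' implementation, which ran on TPUv3 with a BigGAN codebase.

```python
import torch

def fgsm_perturb(model, x, y, loss_fn, eps=0.1):
    # One-step FGSM: move each pixel by eps in the direction of the sign of
    # the loss gradient. eps = 0.1 matches the attack strength quoted from
    # the paper; the model, loss, and pixel range [0, 1] are assumptions.
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def perturb_logits(logits, sigma2=0.55):
    # Additive Gaussian noise on the logit vector, N(mu = 0, sigma^2 = 0.55),
    # as quoted above. torch.randn_like samples N(0, 1), so scale by sqrt(sigma2).
    return logits + (sigma2 ** 0.5) * torch.randn_like(logits)
```

These sketches cover only the parameters the main text reports; as the table notes, the remaining training configuration is deferred to the Supplementary Materials.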