Towards Context-Agnostic Learning Using Synthetic Data
Authors: Charles Jin, Martin Rinard
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate our methods by training deep neural networks for a variety of real-world image classification tasks using only a single synthetic example of each class, obtaining robust performance in the context-agnostic setting on natural data. Conversely, we find that classifiers trained without our techniques using only natural data achieve negligible accuracy even under relatively benign perturbations that leave a well-defined object in the foreground completely untouched. |
| Researcher Affiliation | Academia | Charles Jin MIT Cambridge, MA 02139 ccj@csail.mit.edu Martin Rinard MIT Cambridge, MA 02139 rinard@csail.mit.edu |
| Pseudocode | Yes | Algorithm 1: Greedy Bias Correction |
| Open Source Code | No | The paper uses PyTorch and mentions a PyTorch library for CAM methods (jacobgil/pytorch-grad-cam) in its references. However, it does not explicitly state that the source code for *their* proposed methodology is publicly available, nor does it provide a link to *their* repository. |
| Open Datasets | Yes | The target dataset is the German Traffic Sign Recognition Benchmark (GTSRB) [Stallkamp et al., 2012]... The target dataset, MNIST [Le Cun], consists of 60,000 training and 10,000 test images... We use the Omniglot [Lake et al., 2015] challenge... |
| Dataset Splits | No | The paper mentions training and testing datasets (e.g., '60,000 training and 10,000 test images' for MNIST). However, it does not explicitly specify the size, percentage, or method for a validation split in the main text. |
| Hardware Specification | Yes | Our implementation is written in Python (Paszke et al., 2019) using the PyTorch deep learning framework, and all experiments were carried out on a single NVIDIA 2080 Ti GPU. |
| Software Dependencies | No | Our implementation is written in Python (Paszke et al., 2019) using the PyTorch deep learning framework... While PyTorch is cited with a year, the specific version number for PyTorch, Python, or other libraries like OpenCV (mentioned in references) or scikit-image is not provided. |
| Experiment Setup | Yes | Appendix C provides the full experimental setup and training details. Our models are trained using the Adam optimizer with default parameters (β₁ = 0.9, β₂ = 0.999), and a batch size of 10 for all datasets, for 100 epochs. |
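The quoted training configuration can be sketched in PyTorch as follows. This is a minimal illustration of the stated hyperparameters (Adam with default betas, batch size 10, 100 epochs), not the paper's implementation; the placeholder model and `train_step` helper are assumptions for the sake of a runnable example.

```python
import torch
from torch import nn, optim

# Hyperparameters quoted from the paper's Appendix C.
BATCH_SIZE, EPOCHS, NUM_CLASSES = 10, 100, 10

# Placeholder classifier; the paper's actual architecture is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, NUM_CLASSES))

# Adam with default parameters (beta1 = 0.9, beta2 = 0.999), as stated.
optimizer = optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step over a single batch."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One dummy batch with the quoted batch size; a full run would loop
# over the dataset for EPOCHS epochs.
loss = train_step(torch.randn(BATCH_SIZE, 1, 28, 28),
                  torch.randint(0, NUM_CLASSES, (BATCH_SIZE,)))
```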