Data Augmentation for Meta-Learning
Authors: Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, University of Maryland, College Park 2Microsoft. |
| Pseudocode | Yes | Algorithm 1 Meta-MaxUp |
| Open Source Code | Yes | A PyTorch implementation of our data augmentation methods for meta-learning can be found at: https://github.com/RenkunNi/MetaAug |
| Open Datasets | Yes | In this paper, we perform our experiments on the miniImageNet and CIFAR-FS datasets as well as the Meta-Dataset benchmark (Vinyals et al., 2016; Bertinetto et al., 2018; Triantafillou et al., 2019). |
| Dataset Splits | Yes | Each of these datasets contains 64 training classes, 16 validation classes, and 20 classes for testing. |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or detailed computer specifications) used for running experiments were found in the main text of the paper. |
| Software Dependencies | No | The paper mentions a 'PyTorch implementation' but does not specify its version number or other key software dependencies with their versions. |
| Experiment Setup | No | The paper states 'A description of training hyperparameters and computational complexity can be found in Appendix C.', but specific hyperparameter values or detailed training configurations are not provided in the main text. |
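The pseudocode row above refers to the paper's Meta-MaxUp algorithm, which samples several random augmentations of each meta-learning task and minimizes the worst-case (maximum) meta-loss over them. The following is a minimal NumPy sketch of that worst-case objective only; the toy squared-error `meta_loss` and the additive-noise `augment` function are illustrative assumptions, not the paper's actual few-shot loss or augmentation pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_loss(x, y, w):
    # Toy stand-in for a meta-learner's query loss (squared error).
    return float(np.mean((x @ w - y) ** 2))

def maxup_loss(x, y, w, augment, m=4):
    # MaxUp-style objective: draw m random augmentations of the task
    # data and return the maximum loss among them; Meta-MaxUp's outer
    # loop minimizes this worst-case value instead of the plain loss.
    return max(meta_loss(augment(x), y, w) for _ in range(m))

# Hypothetical toy task data and parameters, for illustration only.
x = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = rng.normal(size=3)
augment = lambda a: a + 0.1 * rng.normal(size=a.shape)  # e.g. additive noise

worst = maxup_loss(x, y, w, augment)
```

In the paper's setting the maximum is taken over augmented episodes (support/query sets), and the gradient of the selected worst-case loss drives the meta-update.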