Task-Robust Model-Agnostic Meta-Learning
Authors: Liam Collins, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also provide an upper bound on the new task generalization error that captures the advantage of minimizing the worst-case task loss, and demonstrate this advantage in sinusoid regression and image classification experiments. |
| Researcher Affiliation | Academia | Liam Collins, ECE Department, University of Texas at Austin, Austin, TX 78712, liamc@utexas.edu; Aryan Mokhtari, ECE Department, University of Texas at Austin, Austin, TX 78712, mokhtari@austin.utexas.edu; Sanjay Shakkottai, ECE Department, University of Texas at Austin, Austin, TX 78712, sanjay.shakkottai@utexas.edu |
| Pseudocode | Yes | Algorithm 1 Task-Robust MAML (TR-MAML) (a minimal sketch of this meta-training loop follows the table) |
| Open Source Code | Yes | The code for TR-MAML is available at: https://github.com/lgcollins/tr-maml. |
| Open Datasets | Yes | We experiment in this setting using the Omniglot [19] and mini-ImageNet [37] datasets. |
| Dataset Splits | Yes | We use the same (meta-) train/validation/test splits as in [36]. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Both algorithms use one SGD step as the inner learning algorithm, and the same fully-connected network architecture as in [12] for the learning model. ... We meta-train for 60,000 iterations with a batch size of 2 task instances, and 5 steps of gradient descent for local adaptation. (The evaluation-time adaptation is sketched after the table.) |
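The pseudocode row refers to Algorithm 1 (TR-MAML), which replaces MAML's average post-adaptation loss with a worst-case objective: a min over the meta-parameters and a max over a weight vector `p` on the probability simplex, applied to the per-task losses after one inner SGD step. Below is a minimal PyTorch sketch of that min-max loop under our own assumptions: the functional MLP, the task-sampler interface, the simplex projection, and all learning rates (`alpha`, `beta`, `eta`) are illustrative placeholders, not values or code from the paper.

```python
import numpy as np
import torch

def mlp_forward(params, x):
    # Fully-connected regression network (layer sizes are illustrative).
    w1, b1, w2, b2, w3, b3 = params
    h = torch.relu(x @ w1 + b1)
    h = torch.relu(h @ w2 + b2)
    return h @ w3 + b3

def init_params():
    sizes = [(1, 40), (40,), (40, 40), (40,), (40, 1), (1,)]
    return [(0.1 * torch.randn(*s)).requires_grad_() for s in sizes]

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0)
    ks = torch.arange(1, v.numel() + 1, dtype=v.dtype)
    rho = torch.nonzero(u - (css - 1) / ks > 0).max()
    tau = (css[rho] - 1) / (rho + 1)
    return torch.clamp(v - tau, min=0)

def tr_maml(tasks, alpha=0.01, beta=1e-3, eta=1e-2, iters=60_000, batch=2):
    # tasks: list of n callables, each returning (x_tr, y_tr, x_val, y_val).
    params, n = init_params(), len(tasks)
    p = torch.full((n,), 1.0 / n)  # adversarial weights over the n tasks
    for _ in range(iters):
        meta_loss = 0.0
        for i in np.random.choice(n, size=batch, replace=False):
            x_tr, y_tr, x_val, y_val = tasks[i]()
            # Inner loop: one SGD step, as in the reported setup.
            inner = torch.mean((mlp_forward(params, x_tr) - y_tr) ** 2)
            grads = torch.autograd.grad(inner, params, create_graph=True)
            adapted = [w - alpha * g for w, g in zip(params, grads)]
            outer = torch.mean((mlp_forward(adapted, x_val) - y_val) ** 2)
            meta_loss = meta_loss + p[i] * outer  # weight by current p_i
            p[i] = p[i] + eta * outer.detach()    # ascent step on p_i
        p = project_simplex(p)                    # keep p on the simplex
        meta_grads = torch.autograd.grad(meta_loss, params)
        with torch.no_grad():
            for w, g in zip(params, meta_grads):
                w -= beta * g
    return params, p
```

The design point the min-max structure captures: tasks whose post-adaptation loss is high receive larger weight `p_i`, so the meta-parameters are pushed toward initializations that no single task adapts poorly from, which is the worst-case guarantee the paper's generalization bound formalizes.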
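The experiment-setup row states that meta-training uses one inner SGD step, while evaluation uses 5 steps of gradient descent for local adaptation. A hedged sketch of that evaluation-time adaptation follows; the function name, the `forward` argument (e.g. the `mlp_forward` from the sketch above), and the learning rate are our assumptions, since the excerpt does not report them.

```python
import torch

def adapt(params, x, y, forward, steps=5, lr=0.01):
    # Evaluation-time local adaptation: a few full-batch gradient
    # steps on the task's support set, starting from the meta-params.
    adapted = [w.detach().clone().requires_grad_() for w in params]
    for _ in range(steps):
        loss = torch.mean((forward(adapted, x) - y) ** 2)
        grads = torch.autograd.grad(loss, adapted)
        adapted = [(w - lr * g).detach().requires_grad_()
                   for w, g in zip(adapted, grads)]
    return adapted
```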