Online Structured Meta-learning
Authors: Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui (Jessie) Li, Richard Socher, Caiming Xiong
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework in the context of both homogeneous and heterogeneous tasks. |
| Researcher Affiliation | Collaboration | 1Pennsylvania State University, 2Salesforce Research |
| Pseudocode | Yes | Algorithm 1 Online Meta-learning Pipeline of OSML |
| Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | Here, we follow [10] and create a Rainbow MNIST dataset, which contains a sequence of tasks generated from the original MNIST dataset. ... The first dataset is generated from mini-Imagenet. ... We create the second dataset called Meta-dataset by following [37, 41]. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits. It mentions support and query sets within tasks, and a test set for evaluation, but no dedicated validation split. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models or memory) are provided for the experimental setup. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Require: β1, β2, β3, β4, β5: learning rates ... We report hyperparameters and model structures in Appendix A.2. |
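
The quoted Algorithm 1 header lists five learning rates for the OSML pipeline, in which tasks arrive sequentially and the learner adapts to each before updating its meta-parameters. As a rough illustration of such an online meta-learning loop (this is not the authors' structured-pathway method: the Reptile-style first-order meta-update, the two learning rates, and all function names below are assumptions for the sketch):

```python
import numpy as np

def inner_adapt(w, X, y, alpha, steps=1):
    """Task-specific adaptation: a few gradient steps on the support set
    of the current task, using mean-squared-error on a linear model."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - alpha * grad
    return w

def online_meta_learning(tasks, dim, alpha=0.01, beta=0.001, inner_steps=1):
    """Process tasks one at a time (online setting). After adapting to each
    task, apply a first-order (Reptile-style) meta-update that moves the
    meta-parameters toward the adapted parameters. This stands in for the
    paper's OSML pipeline, whose meta-knowledge pathways are not reproduced."""
    w_meta = np.zeros(dim)
    for X_support, y_support in tasks:  # tasks observed sequentially
        w_task = inner_adapt(w_meta, X_support, y_support, alpha, inner_steps)
        w_meta = w_meta + beta * (w_task - w_meta)  # first-order meta-step
    return w_meta
```

On a synthetic stream of regression tasks sharing one underlying weight vector, the meta-parameters converge toward that shared solution, which is the basic behavior an online meta-learner should exhibit.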