Online Constrained Meta-Learning: Provable Guarantees for Generalization
Authors: Siyuan Xu, Minghui Zhu
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Moreover, we provide a practical algorithm for the framework and validate its superior effectiveness through experiments conducted on meta-imitation learning and few-shot image classification. |
| Researcher Affiliation | Academia | Siyuan Xu & Minghui Zhu, School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16801, {spx5032, muz16}@psu.edu |
| Pseudocode | Yes | Algorithm 1 Online Constrained Meta-Learning Framework |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We test the algorithms on two few-shot learning datasets, CUB [53] and miniImageNet [52]. |
| Dataset Splits | No | The paper describes the number of data samples used for training and validation for each task (e.g., '\|D_0^tr\| = 50', '\|D_0^val\| = 50'), but it does not specify fixed training/validation/test splits for the overall datasets (e.g., 80/10/10 percentages or specific counts), which would be needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software like PyTorch and optimizers like Adam, but it does not provide specific version numbers for these software components, which are necessary for reproducible descriptions. |
| Experiment Setup | Yes | In the experiments, the total number of tasks is T = 100. For each task, the number of training data \|D_0^tr\| = 50, \|D_+^tr\| = 50, and the validation data \|D_0^val\| = 50. The regularization parameter λ = 0.1, and the perturbation parameter η = 0.01. ... For few-shot image classification, the total number of tasks is T = 200. We consider 5-way 1-shot and 5-way 5-shot learning. The training data \|D_0^tr\| = 5 and \|D_+^tr\| = 5 for 5-shot learning, and \|D_0^tr\| = 1 and \|D_+^tr\| = 1 for 1-shot learning. The validation data \|D_0^val\| = 50. *(Restated as a config sketch below.)* |
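
The Experiment Setup row above can be restated compactly as a configuration sketch. The dictionary keys and the `EXPERIMENT_CONFIGS` name below are hypothetical (nothing of the sort appears in the paper); only the numeric values are taken from the quoted setup description.

```python
# Hypothetical restatement of the quoted experiment setup; key names are
# illustrative, numeric values are those reported in the row above.
EXPERIMENT_CONFIGS = {
    "meta_imitation_learning": {
        "num_tasks": 100,                     # T = 100
        "train_sizes": {"D0_tr": 50, "Dplus_tr": 50},
        "val_sizes": {"D0_val": 50},
        "regularization_lambda": 0.1,         # λ
        "perturbation_eta": 0.01,             # η
    },
    "few_shot_image_classification": {
        "num_tasks": 200,                     # T = 200
        "settings": {
            "5-way 5-shot": {"D0_tr": 5, "Dplus_tr": 5, "D0_val": 50},
            "5-way 1-shot": {"D0_tr": 1, "Dplus_tr": 1, "D0_val": 50},
        },
    },
}
```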
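The Pseudocode row notes that the paper presents Algorithm 1, an online constrained meta-learning framework, but the framework itself is not reproduced here. The following is a minimal, self-contained sketch of what such an online loop could look like, assuming a toy quadratic task with one linear inequality constraint, a biased-regularization inner step handled by dual ascent, and a finite-difference meta-gradient in which the perturbation parameter η acts as a smoothing radius. All class and function names (`ToyTask`, `adapt`, `online_meta_learning`) are hypothetical; this is not the paper's Algorithm 1.

```python
"""Minimal sketch of an online constrained meta-learning loop (illustrative only)."""
import numpy as np

rng = np.random.default_rng(0)


class ToyTask:
    """Toy task: quadratic loss ||w - w*||^2 with one constraint a^T w - b <= 0."""

    def __init__(self, dim):
        self.w_star = rng.normal(size=dim)
        self.a = rng.normal(size=dim)
        self.b = 0.5

    def loss(self, w):
        return float(np.sum((w - self.w_star) ** 2))

    def loss_grad(self, w):
        return 2.0 * (w - self.w_star)

    def constraint(self, w):
        return float(self.a @ w - self.b)

    def constraint_grad(self, w):
        return self.a


def adapt(theta, task, lam=0.1, lr=0.05, steps=50):
    """Within-task adaptation: minimize task loss + (lam/2)||w - theta||^2
    subject to the task constraint, handled here by simple dual ascent."""
    w, dual = theta.copy(), 0.0
    for _ in range(steps):
        grad = task.loss_grad(w) + lam * (w - theta) + dual * task.constraint_grad(w)
        w -= lr * grad
        dual = max(0.0, dual + lr * task.constraint(w))  # project multiplier to >= 0
    return w


def online_meta_learning(num_tasks=100, dim=5, lam=0.1, eta=0.01, meta_lr=0.05):
    """Tasks arrive one at a time; the meta-prior theta is updated after each
    task via a finite-difference estimate of the adapted validation loss
    (an assumed role for the perturbation parameter eta)."""
    theta = np.zeros(dim)
    for _ in range(num_tasks):
        task = ToyTask(dim)
        u = rng.normal(size=dim)
        base = task.loss(adapt(theta, task, lam=lam))
        perturbed = task.loss(adapt(theta + eta * u, task, lam=lam))
        meta_grad = (perturbed - base) / eta * u  # one-point smoothing estimate
        theta -= meta_lr * meta_grad
    return theta


if __name__ == "__main__":
    print("learned meta-prior:", np.round(online_meta_learning(), 3))
```

The sketch uses a single toy task distribution and a scalar constraint so it runs as-is; in the paper's experiments the per-task data splits and constraint structure come from meta-imitation learning and few-shot image classification instead.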