Efficient Meta Learning via Minibatch Proximal Update
Authors: Pan Zhou, Xiaotong Yuan, Huan Xu, Shuicheng Yan, Jiashi Feng
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several few-shot regression and classification tasks demonstrate the advantages of our method over state-of-the-arts. |
| Researcher Affiliation | Collaboration | Learning & Vision Lab, National University of Singapore, Singapore; B-DAT Lab, Nanjing University of Information Science & Technology, Nanjing, China; Alibaba and Georgia Institute of Technology, USA; YITU Technology, Shanghai, China |
| Pseudocode | Yes | Algorithm 1: SGD for Meta-Minibatch Prox |
| Open Source Code | Yes | The code is available at https://panzhous.github.io. |
| Open Datasets | Yes | miniImageNet [5] and tieredImageNet [43] |
| Dataset Splits | Yes | Following [6, 10], we use the split proposed in [5], which consists of 64 classes for training, 16 classes for validation and the remaining 20 classes for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions optimizers like SGD and Adam, but does not provide specific version numbers for any software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | For our Meta-Minibatch Prox, we set λ = 0.5 and use SGD to solve the inner subproblem with 15 iterations at learning rate 0.02. For the learning rate ηs in Meta-Minibatch Prox, we decrease it at each iteration as ηs = α(1 − s/S), where the total iteration number S in Algorithm 1 and α are set to 30,000 and 0.8, respectively. (A hedged sketch of this setup follows the table.) |
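
Below is a minimal sketch of Algorithm 1 (SGD for Meta-Minibatch Prox) wired up with the hyperparameters quoted above (λ = 0.5, 15 inner SGD steps at learning rate 0.02, S = 30,000, α = 0.8). The task interfaces `sample_task_batch` and `task_loss_grad` are hypothetical placeholders, and the outer step w ← w − ηs·λ·(w − ŵ) assumes the standard gradient form of the Moreau envelope rather than the authors' exact update; see the released code at https://panzhous.github.io for the reference implementation.

```python
import numpy as np

def meta_minibatch_prox(w0, sample_task_batch, task_loss_grad,
                        lam=0.5, S=30_000, alpha=0.8,
                        inner_steps=15, inner_lr=0.02):
    """Sketch: approximately minimize E_T[min_u L_T(u) + lam/2 * ||u - w||^2] over w.

    `sample_task_batch` and `task_loss_grad` are hypothetical callables:
    the first returns a minibatch of few-shot tasks, the second returns the
    gradient of a task's loss at the given parameters.
    """
    w = np.asarray(w0, dtype=float)
    for s in range(S):
        eta_s = alpha * (1.0 - s / S)     # decaying outer learning rate, eta_s = alpha*(1 - s/S)
        tasks = sample_task_batch()       # minibatch of few-shot tasks

        # Inner loop: 15 SGD steps on the proximal subproblem
        #   min_u (1/b) * sum_T L_T(u) + (lam/2) * ||u - w||^2
        u = w.copy()
        for _ in range(inner_steps):
            g = np.mean([task_loss_grad(t, u) for t in tasks], axis=0)
            u -= inner_lr * (g + lam * (u - w))

        # Outer step (assumption): the Moreau-envelope gradient is lam * (w - u*)
        w -= eta_s * lam * (w - u)
    return w
```

With α = 0.8 and S = 30,000 as reported, this schedule starts the outer learning rate at 0.8 and decays it linearly to zero over training.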