Consistent MetaReg: Alleviating Intra-task Discrepancy for Better Meta-knowledge
Authors: Pinzhuo Tian, Lei Qi, Shaokang Dong, Yinghuan Shi, Yang Gao
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experiments also demonstrate the effectiveness of our method, which indeed improves the performances of the state-of-the-art gradient-based meta-learning models in the few-shot classification task. |
| Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, China |
| Pseudocode | Yes | Algorithm 1 The proposed CM for meta supervised learning |
| Open Source Code | Yes | Source code: https://github.com/P1nzhuo/Consistent-MetaReg |
| Open Datasets | Yes | We also validate CM on three benchmark datasets, i.e., miniImageNet [Vinyals et al., 2016], tieredImageNet [Ren et al., 2018] and Office31 [Saenko et al., 2010]. |
| Dataset Splits | Yes | These classes are randomly split into 64, 16 and 20 classes for meta-training, meta-validation, and meta-testing respectively. |
| Hardware Specification | Yes | All methods are trained on a single NVIDIA 1080 Ti. |
| Software Dependencies | No | The paper mentions "Adam with learning rate 0.001 is used as optimizer" and "existing deep learning frameworks (e.g., PyTorch and TensorFlow)" but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Adam with learning rate 0.001 is used as the optimizer. During the meta-training process, we sample 5 training (support) samples and 5 test (query) samples as S_i^tr and S_i^ts in each task T_i, respectively. The number of test examples for each task is set as 10 during the meta-testing phase. The meta batch-size is set as 4 for MAML and 8 for the other methods. The parameter δ is set as 1 for MAML and 5 for MetaOptNet-SVM and R2-D2. |
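
The Experiment Setup row only quotes the hyperparameters in prose, so a minimal Python sketch of the configuration and episodic (support/query) sampling it describes is given below. All identifiers here (`ExperimentConfig`, `sample_episode`) are illustrative and not taken from the authors' repository; the quote does not state whether the 5 support and 5 query samples are counted per class or per task, so the per-class reading within a 5-way task is an assumption.

```python
# Sketch of the quoted setup: Adam lr 0.001, 5 support / 5 query per task during
# meta-training, 10 query samples at meta-test time, meta batch-size 4 (MAML) or 8
# (other methods), delta = 1 (MAML) or 5 (MetaOptNet-SVM, R2-D2).
# Names and the per-class interpretation of the sample counts are assumptions.
import random
from dataclasses import dataclass


@dataclass
class ExperimentConfig:
    optimizer: str = "Adam"
    learning_rate: float = 0.001
    n_support: int = 5            # support samples during meta-training
    n_query_train: int = 5        # query samples during meta-training
    n_query_test: int = 10        # query (test) samples during meta-testing
    meta_batch_size_maml: int = 4
    meta_batch_size_other: int = 8
    delta_maml: float = 1.0       # regularization weight delta for MAML
    delta_other: float = 5.0      # delta for MetaOptNet-SVM and R2-D2


def sample_episode(class_to_images, n_way=5, n_support=5, n_query=5, rng=random):
    """Sample one n-way few-shot task as (support, query) lists of (item, label)."""
    classes = rng.sample(sorted(class_to_images), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = rng.sample(class_to_images[cls], n_support + n_query)
        support += [(item, label) for item in items[:n_support]]
        query += [(item, label) for item in items[n_support:]]
    return support, query


if __name__ == "__main__":
    cfg = ExperimentConfig()
    # Toy stand-in for the 64-class miniImageNet meta-training split.
    data = {f"class_{i}": [f"img_{i}_{j}" for j in range(20)] for i in range(64)}
    support, query = sample_episode(data, n_support=cfg.n_support,
                                    n_query=cfg.n_query_train)
    print(len(support), len(query))  # 25 25 for a 5-way, 5-shot, 5-query episode
```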