Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile
Authors: Dong Chen, Lingfei Wu, Siliang Tang, Xiao Yun, Bo Long, Yueting Zhuang
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that Eigen-Reptile significantly outperforms the baseline, Reptile, by 22.93% and 5.85% on the corrupted and clean datasets, respectively (Section 5, Experimental Results and Discussion). |
| Researcher Affiliation | Collaboration | 1 College of Computer Science and Technology, Zhejiang University, Hangzhou, China; 2 JD.COM Silicon Valley Research Center, 675 E Middlefield Rd, Mountain View, CA 94043 USA. |
| Pseudocode | Yes | Algorithm 1 Eigen-Reptile [a hedged sketch of the main-direction update follows the table] |
| Open Source Code | Yes | The code and data for the proposed method are provided for research purposes https://github.com/Anfeather/Eigen-Reptile. |
| Open Datasets | Yes | We verify the effectiveness of Eigen-Reptile in alleviating overfitting to sampling noise on two clean few-shot classification datasets, Mini-Imagenet (Vinyals et al., 2016) and CIFAR-FS (Bertinetto et al., 2018). The Mini-Imagenet dataset contains 100 classes, each with 600 images. We follow (Ravi & Larochelle, 2016) to divide the dataset into three disjoint subsets: meta-training set, meta-validation set, and meta-testing set with 64 classes, 16 classes, and 20 classes, respectively. |
| Dataset Splits | Yes | We follow (Ravi & Larochelle, 2016) to divide the dataset into three disjoint subsets: meta-training set, meta-validation set, and meta-testing set with 64 classes, 16 classes, and 20 classes, respectively. [the split is illustrated in a short sketch after the table] |
| Hardware Specification | Yes | All experiments run on a 2080 Ti. |
| Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not specify version numbers or other software dependencies. |
| Experiment Setup | Yes | All meta-learners use the same regressor, trained for 30000 iterations with 5 inner-loop steps, batch size 10, and a fixed inner-loop learning rate of 0.02. Our model is trained for 100000 iterations with a fixed inner-loop learning rate of 0.0005, 7 inner-loop steps, and batch size 10. [these settings are transcribed into a config sketch after the table] |
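
The evidence above cites Algorithm 1 by name without reproducing its body. As a rough illustration only, the following minimal NumPy sketch captures the main-direction idea behind Eigen-Reptile: collect the inner-loop parameter trajectory, estimate its principal direction (here with a plain SVD, whereas the paper proposes a faster estimator), and move the meta-initialization along that direction. The function names, the `grad_fn` callable, and the step-size choice are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def inner_loop(phi, task, grad_fn, steps=5, lr=0.02):
    """Run SGD on one task from initialization phi and return the
    parameter trajectory as a (steps + 1, n_params) matrix."""
    theta = phi.copy()
    trajectory = [theta.copy()]
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta, task)  # task-specific gradient (user-supplied)
        trajectory.append(theta.copy())
    return np.stack(trajectory)

def main_direction(trajectory):
    """Principal direction of the centered trajectory: the top
    right-singular vector of the snapshot matrix, sign-aligned with
    the overall update theta_final - theta_init."""
    centered = trajectory - trajectory.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    overall = trajectory[-1] - trajectory[0]
    if np.dot(direction, overall) < 0:
        direction = -direction
    return direction

def eigen_reptile_step(phi, task, grad_fn, outer_lr=0.1, steps=5, lr=0.02):
    """One meta-update: move the initialization along the main direction
    of the inner-loop trajectory. Scaling by the trajectory extent is an
    illustrative choice, not necessarily the paper's."""
    trajectory = inner_loop(phi, task, grad_fn, steps=steps, lr=lr)
    direction = main_direction(trajectory)
    scale = np.linalg.norm(trajectory[-1] - trajectory[0])
    return phi + outer_lr * scale * direction
```

Compared with Reptile's update, which moves the initialization toward the final inner-loop parameters, taking the principal direction of all snapshots averages out individual noisy steps, which matches the paper's stated motivation for robustness to sampling and label noise.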
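
To make the quoted class split concrete, here is a tiny illustrative partition of the 100 Mini-Imagenet classes into disjoint 64/16/20 meta-sets following the cited Ravi & Larochelle (2016) protocol; the placeholder class identifiers are ours, not the real Mini-Imagenet labels.

```python
# Illustrative only: the split assigns whole classes (not images) to disjoint meta-sets.
all_classes = [f"class_{i:03d}" for i in range(100)]  # 100 classes, 600 images each

meta_train = all_classes[:64]    # 64 meta-training classes
meta_val = all_classes[64:80]    # 16 meta-validation classes
meta_test = all_classes[80:]     # 20 meta-testing classes

assert len(meta_train) == 64 and len(meta_val) == 16 and len(meta_test) == 20
```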
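
For quick reference, the reported training settings can be transcribed into a small configuration sketch. The numbers come verbatim from the quote above; the dictionary names, and labeling the second block as the main few-shot model, are our assumptions.

```python
# Hyperparameters as reported in the paper's experiment setup.
REGRESSION_SETUP = {       # "All meta-learners use the same regressor ..."
    "meta_iterations": 30_000,
    "inner_loop_steps": 5,
    "batch_size": 10,
    "inner_lr": 0.02,
}

MAIN_MODEL_SETUP = {       # "Our model is trained for 100000 iterations ..."
    "meta_iterations": 100_000,
    "inner_loop_steps": 7,
    "batch_size": 10,
    "inner_lr": 0.0005,
}
```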