Dual-Level Curriculum Meta-Learning for Noisy Few-Shot Learning Tasks
Authors: Xiaofan Que, Qi Yu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments that demonstrate the effectiveness of our framework in outperforming existing noisy few-shot learning methods under various few-shot classification benchmarks. |
| Researcher Affiliation | Academia | Xiaofan Que, Qi Yu Rochester Institute of Technology {xq5054, qi.yu}@rit.edu |
| Pseudocode | Yes | The detailed training process is summarized in Algorithm 1 of the Appendix (Que and Yu 2024). |
| Open Source Code | Yes | Our code is available at https://github.com/ritmininglab/DCML. |
| Open Datasets | Yes | We evaluate the effectiveness of the proposed dual-level curriculum meta-learning framework (i.e., DCML) using three benchmark datasets: miniImageNet (Ravi and Larochelle 2017), FC100 (Oreshkin, López, and Lacoste 2018), and Omniglot (Lake et al. 2011) for few-shot learning, along with three real-world noisy datasets: miniWV (Li et al. 2017), Food101 (Bossard, Guillaumin, and Van Gool 2014), and CIFAR-100N (Wei et al. 2022). |
| Dataset Splits | No | The paper lists standard benchmark datasets but does not explicitly specify the train/validation/test splits used in the experiments (e.g., percentages, sample counts, or split files), nor does it cite a source for the splits. |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA A100 GPU with three runs (RIT Research Computing 2019). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries or programming languages used in the experiments. |
| Experiment Setup | No | The paper states, "For detailed noise and training hyperparameter settings, please refer to the Appendix." Because these details are deferred to supplementary materials rather than given in the main text, this does not meet the criterion of being explicitly stated in the paper's main content. |