Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time Augmentation
Authors: Yunwei Bai, Ying Kiat Tan, Shiming Chen, Yao Shu, Tsuhan Chen
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally and theoretically demonstrate the effectiveness of our method, obtaining a test accuracy improvement proportion of around 10% (e.g., from 46.86% to 53.28%) for trained FSL models. In this work, we use the Animals dataset, which is sampled from the ImageNet dataset (Russakovsky et al. 2015). Table 1 consists of experimental results of our proposed method over the Animals dataset. We conduct various experiments and analyses to verify the feasibility of our idea. |
| Researcher Affiliation | Academia | National University of Singapore, Mohamed bin Zayed University of Artificial Intelligence, Guangdong Lab of AI and Digital Economy (SZ). All listed institutions are universities or academic research labs, and the email domains (u.nus.edu, gml.ac.cn) are academic, with one author using a personal email while affiliated with an academic institution. |
| Pseudocode | No | The paper describes the architecture and components (Image Combiner, Neighbour Selector, Augmentor) using prose and mathematical equations but does not include a distinct pseudocode or algorithm block. |
| Open Source Code | Yes | Code: https://github.com/WendyBaiYunwei/FSL-Rectifier-Pub |
| Open Datasets | Yes | In this work, we use the Animals dataset, which is sampled from the Image Net dataset (Russakovsky et al. 2015). For further analysis, we also consider a mammal animal dataset, or the Mammals dataset (Asaniczka 2023) consisting of 45 testing classes. |
| Dataset Splits | Yes | The train-test split follows prior works (Liu et al. 2019a). During training of the image combiner, we only use the train-split of a dataset, leaving the test-split dataset unseen. We pass 25,000 5-way-1-shot queries to test a trained FSL model. |
| Hardware Specification | Yes | via one NVIDIA RTX A5000. |
| Software Dependencies | No | The paper mentions "Pytorch transform functions (Paszke et al. 2019)" but does not specify the version of PyTorch or other key libraries used for their implementation. |
| Experiment Setup | Yes | The learning rate is set to 1×10⁻⁴, and the maximum number of training iterations is 10,000. When testing our augmentation against the baseline, the neighbour selector considers 20 candidates for each neighbour selection. For our method, the combined weight of all augmentations and that of the original samples are both set to 0.5. |
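The equal weighting quoted in the Experiment Setup row (0.5 for the original sample, 0.5 for the combined augmentations) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `rectify_query`, the use of feature vectors, and the averaging of augmented features are all assumptions made for clarity.

```python
import numpy as np

def rectify_query(original_feat, augmented_feats, w_original=0.5):
    """Blend an original query feature with the mean of its augmented
    versions. The 0.5/0.5 weighting follows the setup quoted above;
    everything else here is illustrative."""
    aug_mean = np.mean(augmented_feats, axis=0)
    return w_original * original_feat + (1.0 - w_original) * aug_mean

# Toy example: one 4-dim query feature and three augmented copies.
q = np.array([1.0, 0.0, 0.0, 0.0])
augs = np.array([[0.8, 0.2, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.7, 0.3, 0.0, 0.0]])
rectified = rectify_query(q, augs)
```

With equal weights, an outlier query is pulled halfway toward the average of its augmented neighbours, which is the intuition behind the test-time rectification the paper describes.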