Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes
Authors: Minyang Hu, Hong Chang, Zong Guo, Bingpeng Ma, Shiguang Shan, Xilin Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate our theoretical results, we conduct experiments on three benchmarks. Our experimental results confirm that the TAD metric effectively quantifies the task relatedness and reflects the adaptation difficulty on novel tasks for various FSL methods, even if some of them do not learn attributes explicitly or human-annotated attributes are not provided. |
| Researcher Affiliation | Academia | Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/hu-my/TaskAttributeDistance. |
| Open Datasets | Yes | We choose three widely used benchmarks: (1) CUB-200-2011 (CUB) [49]; (2) SUN with Attribute (SUN) [34]; (3) miniImageNet [48]. |
| Dataset Splits | Yes | We follow [23] to split the dataset into 100 training classes, 50 validation classes and 50 test classes. [...] We split the dataset into 430/215/72 classes for training/validation/test, respectively. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions the use of the CLIP model but does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We adopt a four-layer convolution network (Conv-4) with an additional MLP as the meta-learner f_θ. [...] For base-learner g_{ϕ_i} parameterized by ϕ_i, we simply choose a nonparametric base-learner like ProtoNet [42]. [...] We train APNet by simultaneously minimizing the attribute classification loss and the few-shot classification loss. [...] See more implementation and experimental details in the Appendix. (A hedged sketch of this setup follows the table.) |
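
The Experiment Setup row above names the three pieces of the training pipeline: a Conv-4 backbone with an MLP head as the meta-learner f_θ, a nonparametric ProtoNet-style base-learner g_{ϕ_i}, and a joint attribute + few-shot classification objective. The following is a minimal PyTorch sketch of how these pieces could fit together; it assumes a standard 84×84 input resolution, and the class names, the MLP width, the attribute count (312, as in CUB), and the loss weighting are illustrative assumptions, not details taken from the authors' code.

```python
# Hypothetical sketch of the APNet-style setup described above. Layer sizes,
# the attribute head, and the loss weighting are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class Conv4(nn.Module):
    """Standard four-layer convolutional backbone common in FSL."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(3, 64), conv_block(64, 64),
            conv_block(64, 64), conv_block(64, out_ch),
        )

    def forward(self, x):
        return self.encoder(x).flatten(1)  # (B, 64*5*5) for 84x84 inputs

class APNet(nn.Module):
    """Meta-learner f_theta: Conv-4 plus an MLP head predicting attributes."""
    def __init__(self, feat_dim=64 * 5 * 5, n_attributes=312):
        super().__init__()
        self.backbone = Conv4()
        self.attr_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, n_attributes)
        )

    def forward(self, x):
        z = self.backbone(x)
        return z, self.attr_head(z)

def proto_logits(support_z, support_y, query_z, n_way):
    """Nonparametric base-learner g_phi (ProtoNet-style): negative squared
    Euclidean distance of each query embedding to the class prototypes."""
    protos = torch.stack(
        [support_z[support_y == c].mean(0) for c in range(n_way)]
    )
    return -torch.cdist(query_z, protos) ** 2

def episode_loss(model, support_x, support_y, query_x, query_y,
                 query_attrs, n_way, attr_weight=1.0):
    """Joint objective: few-shot classification loss + attribute loss,
    minimized simultaneously as the setup row describes."""
    s_z, _ = model(support_x)
    q_z, q_attr_logits = model(query_x)
    fsl_loss = F.cross_entropy(
        proto_logits(s_z, support_y, q_z, n_way), query_y
    )
    attr_loss = F.binary_cross_entropy_with_logits(q_attr_logits, query_attrs)
    return fsl_loss + attr_weight * attr_loss
```

In a 5-way 1-shot episode on CUB, for instance, `support_x` would hold the 5 support images, `query_attrs` the 312-dimensional binary attribute vectors of the query images, and `attr_weight` would presumably be tuned on the 50 validation classes mentioned in the Dataset Splits row.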