Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning
Authors: Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, Kwang-Ting Cheng
AAAI 2021, pp. 9594-9602
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method. |
| Researcher Affiliation | Collaboration | 1 Carnegie Mellon University; 2 Hong Kong University of Science and Technology; 3 Inception Institute of Artificial Intelligence |
| Pseudocode | Yes | Algorithm 1: Evolutionary algorithm for searching the best fine-tuning configuration. (A hedged sketch of this search is given below the table.) |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories. |
| Open Datasets | Yes | We verify our method for few-shot learning on both mini-ImageNet dataset and CUB-200-2011 dataset. mini-ImageNet dataset... (Deng et al. 2009)... CUB-200-2011 contains 200 classes of birds (Wah et al. 2011). |
| Dataset Splits | Yes | We follow (Ravi and Larochelle 2017) to split the data into 64 base classes, 16 validation classes and 20 novel classes. Following (Hilliard et al. 2018), we split the data into 100 base classes, 50 validation classes and 50 novel classes. (A configuration sketch of these splits is given below the table.) |
| Hardware Specification | Yes | For example, using one V100 GPU, our search algorithm only takes 6 hours with Conv6 backbone and one day with ResNet-12 backbone on average. |
| Software Dependencies | No | The paper mentions optimizers like Adam and SGD but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | In training, we train 60,000 episodes for 1-shot and 40,000 episodes for 5-shot tasks on the base dataset... We adopt the Adam optimizer with a learning rate of 1e-3 for training. In fine-tuning, we use SGD with a 0.01 learning rate... For Algorithm 1, we set population size P = 20, max iterations I = 20, and the numbers of random sampling (R), mutation (M) and crossover (C) to 50. (Sketches of these settings are given below the table.) |
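
For reference, the class splits quoted in the Dataset Splits row can be written as a small configuration. This is a hypothetical helper, not code released by the authors; the dictionary keys and function name are assumptions.

```python
# Hypothetical split configuration reflecting the quoted splits; the paper
# releases no code, so these names are illustrative only.
DATASET_SPLITS = {
    # mini-ImageNet, following Ravi and Larochelle (2017)
    "mini-imagenet": {"base": 64, "val": 16, "novel": 20},
    # CUB-200-2011, following Hilliard et al. (2018)
    "cub-200-2011": {"base": 100, "val": 50, "novel": 50},
}

def total_classes(dataset: str) -> int:
    """Total class count implied by the reported split."""
    s = DATASET_SPLITS[dataset]
    return s["base"] + s["val"] + s["novel"]
```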
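
The optimizer settings quoted in the Experiment Setup row can be sketched as follows, assuming a PyTorch implementation; the paper does not release code, so `backbone` and `classifier` are placeholders.

```python
import torch

# Assumed PyTorch rendering of the reported settings; not the authors' code.
def base_training_optimizer(backbone):
    # Base-class training: Adam with learning rate 1e-3, as reported.
    return torch.optim.Adam(backbone.parameters(), lr=1e-3)

def finetune_optimizer(classifier):
    # Fine-tuning on novel-class support data: SGD with learning rate 0.01.
    return torch.optim.SGD(classifier.parameters(), lr=0.01)

# Training length reported in the setup.
EPISODES = {"1-shot": 60_000, "5-shot": 40_000}
```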
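
Algorithm 1 is an evolutionary search over which parts of the network to fine-tune. The paper provides pseudocode but no code, so the sketch below is a minimal, hedged reconstruction: the binary-mask encoding, the mutation probability, and the `evaluate` callback are assumptions, while the hyperparameters (P = 20, I = 20, R = M = C = 50) follow the quoted setup.

```python
import random

P, I = 20, 20          # population size and max iterations (as reported)
R, M, C = 50, 50, 50   # random-sampling, mutation, and crossover counts (as reported)
NUM_LAYERS = 12        # placeholder, e.g. ResNet-12 blocks

def random_config():
    # A fine-tuning configuration encoded as a binary mask: 1 = fine-tune this layer.
    return tuple(random.randint(0, 1) for _ in range(NUM_LAYERS))

def mutate(cfg, prob=0.1):
    # Flip each bit with a small probability (probability value is an assumption).
    return tuple(bit ^ 1 if random.random() < prob else bit for bit in cfg)

def crossover(a, b):
    # Pick each layer's bit from one of the two parents at random.
    return tuple(random.choice(pair) for pair in zip(a, b))

def evolutionary_search(evaluate):
    """`evaluate(cfg)` should return validation accuracy after partially
    fine-tuning the layers selected by `cfg` (left abstract here)."""
    population = [random_config() for _ in range(R)]
    for _ in range(I):
        top = sorted(population, key=evaluate, reverse=True)[:P]
        mutated = [mutate(random.choice(top)) for _ in range(M)]
        crossed = [crossover(*random.sample(top, 2)) for _ in range(C)]
        population = top + mutated + crossed
    return max(population, key=evaluate)
```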