Efficient K-Shot Learning With Regularized Deep Networks
Authors: Donghyun Yoo, Haoqi Fan, Vishnu Boddeti, Kris Kitani
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method can be easily applied to several popular convolutional neural networks and improve upon other state-of-the-art fine-tuning based k-shot learning strategies by more than 10% in accuracy. |
| Researcher Affiliation | Collaboration | The Robotics Institute, School of Computer Science, Carnegie Mellon University; Facebook; Michigan State University |
| Pseudocode | Yes | Algorithm 1: Grouping and average gradient update algorithm |
| Open Source Code | No | The paper does not provide concrete access to source code, nor does it state that the code is available in supplementary materials or via a specific repository link. |
| Open Datasets | Yes | Our pre-trained network is the ResNet-18 architecture by (He et al. 2016) trained on the ImageNet dataset. For this task, we consider the Office dataset introduced by (Saenko et al. 2010). Our pre-trained network is the ResNet-18 architecture trained on the CIFAR-100 dataset while the k-shot learning task is classification on the CIFAR-10 dataset. |
| Dataset Splits | Yes | A_ft is the accuracy of the fine-tuned network whose parameters are clustered, computed on the validation set. The k-shot data are chosen randomly from the target training set for fine-tuning, and we evaluate on the entire target test set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | For fine-tuning, the learning rate is 0.01, and it is changed to 0.001 after 1000 iterations. |
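The reported setup (ImageNet-pre-trained ResNet-18, k-shot samples drawn randomly from the target training set, learning rate 0.01 dropped to 0.001 after 1000 iterations) can be expressed as a short fine-tuning loop. The sketch below is not the authors' code; it assumes PyTorch/torchvision, CIFAR-10 as the target task, and illustrative values for k, batch size, momentum, the replaced classifier head, and the total iteration budget, none of which are specified in the evidence above.

```python
# Minimal fine-tuning sketch matching the quoted setup (assumptions noted in comments).
import random
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
import torchvision
from torchvision import transforms

k = 5                       # shots per class (assumption; the paper reports several k values)
num_classes = 10            # CIFAR-10 as the target task
device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pre-trained backbone; replacing the classifier head is an assumption.
model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = model.to(device)

transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                          transform=transform)

# Randomly choose k examples per class from the target training set.
per_class = {c: [] for c in range(num_classes)}
for idx, label in enumerate(train_set.targets):
    per_class[label].append(idx)
indices = [i for c in range(num_classes) for i in random.sample(per_class[c], k)]
loader = DataLoader(Subset(train_set, indices), batch_size=16, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum assumed
criterion = nn.CrossEntropyLoss()

iteration, max_iterations = 0, 2000   # total budget is an assumption
model.train()
while iteration < max_iterations:
    for images, labels in loader:
        if iteration == 1000:         # learning rate drops from 0.01 to 0.001 after 1000 iterations
            for group in optimizer.param_groups:
                group["lr"] = 0.001
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        iteration += 1
        if iteration >= max_iterations:
            break
```

Evaluation would then run the fine-tuned model over the entire target test set, as described in the Dataset Splits row; the paper's grouping and average gradient update step (Algorithm 1) is not reproduced here.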