XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning
Authors: Sung Whan Yoon, Do-Yeon Kim, Jun Seo, Jaekyun Moon
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. |
| Researcher Affiliation | Academia | School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea; School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea. |
| Pseudocode | Yes | A pseudocode-style algorithm description of XtarNet is given in the Supplementary Material. |
| Open Source Code | Yes | Code is available at https://github.com/EdwinKim3069/XtarNet |
| Open Datasets | Yes | We follow the settings of incremental few-shot classification tasks proposed in (Gidaris & Komodakis, 2018; Ren et al., 2019), which are based on the miniImageNet and tieredImageNet datasets. |
| Dataset Splits | Yes | D_base denotes the dataset used in pretraining; D_base/train, D_base/val, and D_base/test are its training, validation, and test splits, respectively. For the incremental setting, the dataset D_novel provides the novel categories and likewise consists of three splits, D_novel/train, D_novel/val, and D_novel/test, which contain disjoint categories (see the split sketch after this table). |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used for running its experiments. It only mentions model architectures like ResNet12 and ResNet18. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as Python, deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | In the meta-learning phase, the Momentum SGD optimizer with an initial learning rate of 0.1 is used. The learning rate is dropped by a factor of 10 every 4,000 episodes. For regularization during meta-learning, l2 regularization is applied with a ratio of 3.0 × 10⁻³ for miniImageNet and 7.0 × 10⁻⁴ for tieredImageNet (see the training-loop sketch after this table). |
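
The dataset organization quoted in the Dataset Splits row can be summarized as a simple nested mapping. This is only an illustrative sketch of the split structure described above; the keys and descriptions are placeholders, not the authors' actual file layout.

```python
# Illustrative sketch of the split structure described in the Dataset Splits row
# (not the authors' actual file layout). D_base is used for pretraining; D_novel
# supplies the novel categories, with its three splits containing disjoint
# category sets.
DATASET_SPLITS = {
    "D_base": {
        "train": "base-category training set (used for pretraining)",
        "val": "base-category validation set",
        "test": "base-category test set",
    },
    "D_novel": {
        "train": "novel categories for episodic meta-training",
        "val": "novel categories for validation (disjoint from train)",
        "test": "novel categories for final evaluation (disjoint from train/val)",
    },
}
```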
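The Experiment Setup row translates directly into an optimizer and learning-rate schedule. Below is a minimal PyTorch-style sketch of that schedule, under stated assumptions: the backbone, loss, episode count, and momentum value (0.9) are illustrative stand-ins, since the row only specifies Momentum SGD, the initial learning rate, the decay schedule, and the l2-regularization ratios.

```python
import torch
from torch import nn, optim

# Placeholder backbone; the paper uses ResNet12/ResNet18 plus XtarNet's
# meta-modules, which are not reproduced here.
model = nn.Linear(640, 64)

# Momentum SGD with initial learning rate 0.1. The momentum value 0.9 is an
# assumption; weight_decay stands in for the reported l2-regularization ratio
# (3.0e-3 for miniImageNet, 7.0e-4 for tieredImageNet).
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=3.0e-3)

# Learning rate dropped by a factor of 10 every 4,000 meta-training episodes.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4000, gamma=0.1)

num_episodes = 12_000  # illustrative only; the total episode count is not quoted above

for episode in range(num_episodes):
    # Dummy episode batch and labels stand in for XtarNet's support/query sets
    # and its episodic objective.
    x = torch.randn(25, 640)
    y = torch.randint(0, 64, (25,))
    loss = nn.functional.cross_entropy(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the step schedule once per episode
```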