Incremental few-shot learning via vector quantization in deep embedded space
Authors: Kuilin Chen, Chi-Guhn Lee
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the proposed method outperforms other state-of-the-art methods in incremental learning. |
| Researcher Affiliation | Academia | Kuilin Chen, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario, Canada (kuilin.chen@mail.utoronto.ca); Chi-Guhn Lee, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario, Canada (cglee@mie.utoronto.ca) |
| Pseudocode | Yes | A.1 PSEUDO CODE FOR IDLVQ-C |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | CUB200-2011 (Welinder et al., 2010) and miniImageNet datasets (Vinyals et al., 2016)... 3D spatial data is collected in North Jutland, Denmark (https://archive.ics.uci.edu/ml/machine-learning-databases/00246/). |
| Dataset Splits | Yes | CUB dataset is composed of 200 fine-grained bird species with 11,788 images. We split the dataset into 5894 training images, 2947 validation images and 2947 test images... Each class contains 500 training images, 50 validation images, and 50 test images. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of an 'SGD optimizer' but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The base model is trained by the SGD optimizer (momentum of 0.9 and weight decay of 1e-4) with a mini-batch size of 64. For CUB dataset, the initial learning rate is 0.01 and is decayed by 0.1 after 60 and 120 epochs (200 epochs in total)... In addition, we use λintra = 1.0 and λF = 0.5 for both datasets. |
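The training setup quoted in the last row can be sketched in code. This is a minimal sketch assuming PyTorch, which the paper does not confirm as its framework; the `nn.Linear` backbone and the training loop body are placeholders, while the optimizer hyperparameters and learning-rate schedule follow the values reported for the CUB dataset.

```python
import torch
from torch import nn, optim

# Placeholder backbone; the paper's actual base model is a deep embedding network.
model = nn.Linear(512, 200)

# SGD with momentum 0.9 and weight decay 1e-4, as reported in the paper.
optimizer = optim.SGD(model.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=1e-4)

# CUB schedule: initial LR 0.01, decayed by 0.1 after epochs 60 and 120
# (200 epochs in total).
scheduler = optim.lr_scheduler.MultiStepLR(optimizer,
                                           milestones=[60, 120], gamma=0.1)

for epoch in range(200):
    # ... train one epoch with mini-batch size 64 ...
    scheduler.step()
```

After the full schedule, the learning rate has passed both decay milestones, ending at 0.01 × 0.1 × 0.1 = 1e-4.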