Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning

Authors: Do-Yeon Kim, Dong-Jun Han, Jun Seo, Jaekyun Moon

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results confirm the effectiveness of our solution and show the improved performance over the state-of-the-art methods."
Researcher Affiliation | Collaboration | 1 Korea Advanced Institute of Science & Technology, 2 Purdue University, 3 LG AI Research
Pseudocode | No | The paper describes the overall procedure in prose but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/EdwinKim3069/WaRP-CIFSL."
Open Datasets | Yes | "We evaluate our method on three benchmark datasets, CIFAR100 (Krizhevsky et al., 2019), miniImageNet (Vinyals et al., 2016) and CUB200 (Wah et al., 2011), in the CIFSL setting."
Dataset Splits | Yes | "When searching the learning rate, we randomly split trainset of base classes into train/validation so that the validation set has a ratio of about 10% in total. We also randomly sample the data from trainset of novel classes (different from the 5-shots that we use for training) and use them as validation set for novel classes." (A minimal split sketch appears below the table.)
Hardware Specification | Yes | "We run the simulations 5 times with different random seeds using NVIDIA GeForce RTX 3090 GPU machine and report average values."
Software Dependencies | No | The paper mentions using "PyTorch's built-in pretrained model" but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "We use SGD optimizer with a momentum of 0.9 throughout the whole sessions for all our simulations. We pretrain the model with the batch size of 128 in the first session for all datasets. The number of pretraining epochs is 210 for ResNet18 on both CIFAR100 and miniImageNet and 300 for ResNet20 on CIFAR100." (A training-setup sketch appears below the table.)
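
The "Dataset Splits" row quotes a roughly 90/10 train/validation split of the base-class training set used for learning-rate search. Below is a minimal sketch of such a split in PyTorch; the placeholder dataset, class count, and seed are assumptions for illustration, not the authors' code.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder standing in for the base-class training set (e.g. the base
# classes of CIFAR-100); replace with the real dataset object.
base_train_set = TensorDataset(torch.randn(3000, 3, 32, 32),
                               torch.randint(0, 60, (3000,)))

torch.manual_seed(0)                        # one of several random seeds
n_val = int(0.1 * len(base_train_set))      # hold out about 10% for validation
n_train = len(base_train_set) - n_val
train_subset, val_subset = random_split(base_train_set, [n_train, n_val])
print(len(train_subset), len(val_subset))   # 2700 300
```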
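
The "Experiment Setup" row lists SGD with momentum 0.9, a batch size of 128, and 210 pretraining epochs for ResNet-18. A hedged PyTorch sketch of that configuration follows; the learning rate, number of base classes, and the commented-out loop are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Base-session pretraining configuration as quoted above; the learning rate
# and the number of base classes are assumptions for illustration only.
model = resnet18(num_classes=60)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

batch_size = 128   # quoted batch size for the first (base) session
num_epochs = 210   # ResNet-18; the paper quotes 300 for ResNet-20 on CIFAR-100

# Outline of the pretraining loop (train_subset comes from the split sketch):
# loader = torch.utils.data.DataLoader(train_subset, batch_size=batch_size, shuffle=True)
# for epoch in range(num_epochs):
#     for images, labels in loader:
#         optimizer.zero_grad()
#         loss = criterion(model(images), labels)
#         loss.backward()
#         optimizer.step()
```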