Compositional Few-Shot Class-Incremental Learning
Authors: Yixiong Zou, Shanghang Zhang, Haichen Zhou, Yuhua Li, Ruixuan Li
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three datasets validate our method, showing it outperforms current state-of-the-art methods with improved interpretability. Our code is available at https://github.com/Zoilsen/Comp-FSCIL. (Abstract; see also Section 4, Experiments.) |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China; (2) School of Computer Science, Peking University, Beijing, China. |
| Pseudocode | No | The paper describes the proposed method in prose and diagrams (e.g., Figure 2) but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/Zoilsen/Comp-FSCIL. |
| Open Datasets | Yes | Datasets are listed in Tab. 3. Our method is based on the code of CEC (Zhang et al., 2021). For miniImageNet, we follow NC-FSCIL (Yibo Yang, 2023) to utilize ResNet12 (He et al., 2016) as the backbone network... For CIFAR100... For CUB200... and miniImageNet (Vinyals et al., 2016)... CIFAR100 (Krizhevsky et al., 2009)... CUB-200-2011 (CUB200) (Wah et al., 2011). |
| Dataset Splits | Yes | miniImageNet... 60 classes are utilized as base classes, while the remaining 40 classes are divided into 8 sessions for incremental learning, where only 5 training samples are available for each novel class. and CIFAR100... 60 classes are selected as base classes, and the remaining 40 classes are divided into 8 incremental sessions with 5 training samples in each novel class. and CUB-200-2011 (CUB200)... 100 classes are selected as base classes, and the remaining 100 classes are separated into 10 sessions for incremental learning. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper mentions the use of specific backbone networks like ResNet12 and Swin Transformer, but does not provide specific version numbers for software dependencies such as programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | For miniImageNet, we follow NC-FSCIL (Yibo Yang, 2023) to utilize ResNet12 (He et al., 2016) as the backbone network, and we set λ1 = λ2 = 2.0, α = 0.8. For CIFAR100... We set λ1 = λ2 = 2.0, α = 0.6. For CUB200... We set λ1 = λ2 = 0.01, α = 0.5. and τ is a temperature parameter. and scale the learning rate of the backbone network to 10% of that in the FC layer. (See the setup sketch after the table.) |
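The Dataset Splits row follows the standard FSCIL session protocol: a large base session, then several small incremental sessions over the remaining classes. Below is a minimal sketch of that class-to-session partition, assuming contiguous class indices; the class counts come from the quotes above, while the helper name is hypothetical.

```python
# Sketch of the FSCIL class-to-session split described in the table.
# Contiguous class indices are an assumption; the paper's quotes only
# state the class counts per dataset.

def fscil_sessions(num_classes, num_base, num_sessions):
    """Split class indices into a base session plus incremental sessions."""
    classes = list(range(num_classes))
    base = classes[:num_base]
    novel = classes[num_base:]
    way = len(novel) // num_sessions  # classes per incremental session
    return [base] + [novel[i * way:(i + 1) * way] for i in range(num_sessions)]

# miniImageNet / CIFAR100: 60 base classes + 8 sessions of 5 classes (5-shot)
mini_sessions = fscil_sessions(num_classes=100, num_base=60, num_sessions=8)
assert len(mini_sessions[0]) == 60 and all(len(s) == 5 for s in mini_sessions[1:])

# CUB200: 100 base classes + 10 incremental sessions (10 classes per session)
cub_sessions = fscil_sessions(num_classes=200, num_base=100, num_sessions=10)
assert len(cub_sessions[0]) == 100 and all(len(s) == 10 for s in cub_sessions[1:])
```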
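The Experiment Setup row records per-dataset hyperparameters (λ1, λ2, α) and the rule that the backbone learning rate is 10% of the FC layer's. Here is a minimal PyTorch sketch of how those settings could be wired up; the module names (`backbone`, `fc`), the stand-in layers, and the base learning rate are assumptions for illustration, since the paper only quotes the λ/α values and the 10% scaling.

```python
import torch
from torch import nn

# Per-dataset hyperparameters quoted in the table (lambda1, lambda2, alpha).
HPARAMS = {
    "miniImageNet": {"lambda1": 2.0,  "lambda2": 2.0,  "alpha": 0.8},
    "CIFAR100":     {"lambda1": 2.0,  "lambda2": 2.0,  "alpha": 0.6},
    "CUB200":       {"lambda1": 0.01, "lambda2": 0.01, "alpha": 0.5},
}

class TinyNet(nn.Module):
    """Stand-in for the ResNet12 backbone plus FC classification head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 64, kernel_size=3)  # placeholder backbone
        self.fc = nn.Linear(64, 100)                     # placeholder head

model = TinyNet()
fc_lr = 0.1  # hypothetical; the paper does not quote the base learning rate
optimizer = torch.optim.SGD(
    [
        {"params": model.backbone.parameters(), "lr": 0.1 * fc_lr},  # backbone at 10%
        {"params": model.fc.parameters(), "lr": fc_lr},              # full LR for FC
    ],
    lr=fc_lr,
    momentum=0.9,
)
```

Two optimizer parameter groups are the idiomatic PyTorch way to express this kind of layer-wise learning-rate scaling without touching the training loop.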