Delve into Base-Novel Confusion: Redundancy Exploration for Few-Shot Class-Incremental Learning
Authors: Haichen Zhou, Yixiong Zou, Ruixuan Li, Yuhua Li, Kui Xiao
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across benchmark datasets, including CIFAR-100, miniImageNet, and CUB-200-2011, demonstrate that our method achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | Haichen Zhou¹, Yixiong Zou¹, Ruixuan Li¹, Yuhua Li¹ and Kui Xiao² (¹Huazhong University of Science and Technology; ²Hubei University) |
| Pseudocode | No | The paper describes its method in prose and mathematical equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | Our method is evaluated on CIFAR-100 [Krizhevsky et al., 2009], miniImageNet [Russakovsky et al., 2015], and CUB200 [Wah et al., 2011]. |
| Dataset Splits | No | The paper describes the incremental learning setup (e.g., 60 base classes and 40 novel classes across 8 incremental sessions) but does not specify explicit train/validation/test splits as percentages or fixed sample counts; the class partition this setup implies is sketched below the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions utilizing ResNet-12 and pretrained ResNet-18 models but does not specify the versions of any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used. |
| Experiment Setup | Yes | Our training protocol follows [Yang et al., 2023], utilizing the ResNet-12 for CIFAR-100 and miniImageNet, and pretrained ResNet-18 for CUB200. Batch sizes of 128 and 100 are employed for the base session and incremental sessions, respectively. ... λ and β are hyper-parameters. (A hedged sketch of this configuration follows the table.) |
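The Dataset Splits row quotes a setup of 60 base classes plus 40 novel classes over 8 incremental sessions, which implies 40 / 8 = 5 new classes per session, the standard CIFAR-100 few-shot class-incremental partition. The sketch below is not from the paper; the contiguous class ordering is an assumption made for illustration.

```python
# A hedged sketch of the class partition implied by the quoted setup:
# 60 base classes, then 40 novel classes revealed over 8 sessions
# (40 // 8 = 5 new classes per session). Contiguous class indices
# are an assumption, not something the paper states.
NUM_CLASSES, NUM_BASE, NUM_SESSIONS = 100, 60, 8
novel_per_session = (NUM_CLASSES - NUM_BASE) // NUM_SESSIONS  # = 5

base_classes = list(range(NUM_BASE))  # classes 0..59, seen in the base session
sessions = [
    list(range(NUM_BASE + s * novel_per_session,
               NUM_BASE + (s + 1) * novel_per_session))
    for s in range(NUM_SESSIONS)
]
# sessions[0] == [60, 61, 62, 63, 64], ..., sessions[7] == [95, 96, 97, 98, 99]
```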
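Since the paper releases no code, the following is a minimal sketch of how the quoted training configuration might be instantiated in PyTorch/torchvision: CIFAR-100 loading with a batch size of 128 for the base session, and an ImageNet-pretrained ResNet-18 as used for CUB200. The data root, the normalization statistics, and the use of torchvision are assumptions; ResNet-12 is a common few-shot backbone that torchvision does not ship and would need a custom implementation.

```python
# A minimal sketch (not the authors' code) of the quoted setup:
# batch size 128 for the base session, 100 for incremental sessions,
# and a pretrained ResNet-18 for CUB200. Paths and transforms are assumptions.
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5071, 0.4865, 0.4409),   # commonly used
                         (0.2673, 0.2564, 0.2762)),  # CIFAR-100 statistics
])

# Base session loader: batch size 128 per the quoted protocol.
base_train = datasets.CIFAR100(root="./data", train=True, download=True,
                               transform=transform)
base_loader = DataLoader(base_train, batch_size=128, shuffle=True)
# Incremental-session loaders would use batch_size=100 on the few-shot data.

# For CUB200 the paper uses a pretrained ResNet-18; torchvision
# provides ImageNet-pretrained weights for this architecture.
cub_backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# ResNet-12 (used for CIFAR-100 and miniImageNet) is not in torchvision;
# a custom implementation would be required here.
```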