Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning

Authors: Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the effectiveness of MRFA on various CIL scenarios.
Researcher Affiliation | Academia | Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan (School of Artificial Intelligence, Nanjing University, China; National Key Laboratory for Novel Software Technology, Nanjing University, China).
Pseudocode | Yes | Algorithm 1: Multi-layer Rehearsal Feature Augmentation. (A hedged sketch of the algorithm's core idea appears after this table.)
Open Source Code | Yes | The code is available on GitHub: https://github.com/bwnzheng/MRFA_ICML2024
Open Datasets | Yes | Following most of the image classification benchmarks in CIL (Rebuffi et al., 2017), we use CIFAR100 and ImageNet100 to train the model incrementally. CIFAR100 (Krizhevsky, 2009) has 50,000 training and 10,000 testing samples with 100 classes in total. ... ImageNet100 (Deng et al., 2009) has 1,300 training samples and 50 test samples for each class. (A loading snippet that checks the CIFAR100 counts appears after this table.)
Dataset Splits | Yes | CIFAR100 (Krizhevsky, 2009) has 50,000 training and 10,000 testing samples with 100 classes in total. ... ImageNet100 (Deng et al., 2009) has 1,300 training samples and 50 test samples for each class. ... There are two common types of splits in CIL. The small base one equally divides all of the classes in a dataset (Rebuffi et al., 2017). The large base one uses half of the classes in a dataset as the base task (task 0) and equally divides the remaining classes (Hou et al., 2019; Yu et al., 2020). (A helper computing both splits appears after this table.)
Hardware Specification | No | The paper does not explicitly state the hardware used to run its experiments. It mentions using ResNet and ViT-based backbones but does not specify CPU/GPU models, memory, or other hardware details.
Software Dependencies | No | The experiments based on ResNet are implemented with the open-source code PyCIL (Zhou et al., 2023a). The experiments based on ViT backbones are implemented with the open-source code of DyTox (Douillard et al., 2022). The paper names the codebases it builds on but provides no version numbers for them or for underlying software such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | Table 6 (training settings for baselines in Section 5.2): Replay: 200 base epochs, 70 incremental epochs, batch size 128, learning rate 0.1, 1 GPU. iCaRL: 200 base epochs, 170 incremental epochs, batch size 128, learning rate 0.1, 1 GPU. FOSTER: 200 base epochs, 170 incremental epochs, batch size 128, learning rate 0.1, 1 GPU. DyTox+: 500 base epochs, 500 incremental epochs, batch size 128, learning rate 5e-4, 2 GPUs (CIFAR100) or 4 GPUs (ImageNet100). ... Hyperparameter Sensitivity: MRFA requires one hyperparameter, β, which controls the maximum scale of the feature augmentations. ... The values of β for each experiment in Section 5.2 are listed in Table 5.
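
As promised in the Pseudocode row, here is a minimal PyTorch sketch of the core idea behind Algorithm 1: features of rehearsal samples are perturbed at every layer along the loss-gradient (ascent) direction, with the step size bounded by the hyperparameter β. The way the backbone is split into stages, the per-sample normalization, and the averaging of the augmented losses are assumptions made for illustration; the authors' exact implementation is in the GitHub repository linked above.

```python
import torch
import torch.nn.functional as F

def mrfa_loss(stages, head, x_mem, y_mem, beta=0.1):
    """Rehearsal loss with multi-layer feature augmentation (sketch).

    `stages` is the backbone split into sequential blocks so that every
    intermediate feature is exposed; `beta` caps the augmentation scale.
    """
    feats, h = [], x_mem
    for stage in stages:
        h = stage(h)
        feats.append(h)
    loss = F.cross_entropy(head(h), y_mem)

    # Gradients of the loss w.r.t. every layer's feature give the per-layer
    # ascent directions; autograd.grad leaves parameter .grad untouched.
    grads = torch.autograd.grad(loss, feats, retain_graph=True)

    aug_losses = []
    for i, (f, g) in enumerate(zip(feats, grads)):
        # Normalize the direction per sample; g carries no graph history,
        # so the perturbation acts as a fixed, adversarial-style offset.
        g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12)
        direction = g / g_norm.view(-1, *([1] * (g.dim() - 1)))
        # β is the *maximum* scale, so sample a magnitude in [0, β]
        # per sample (an assumption based on the quote above).
        scale = beta * torch.rand(f.size(0), *([1] * (f.dim() - 1)),
                                  device=f.device)
        f_aug = f + scale * direction
        # Re-run the remaining layers from the augmented feature.
        for stage in stages[i + 1:]:
            f_aug = stage(f_aug)
        aug_losses.append(F.cross_entropy(head(f_aug), y_mem))

    return loss + torch.stack(aug_losses).mean()
```

Because the clean feature f stays in the graph, a single backward() on the returned loss updates all parameters through both the clean and the augmented paths. Re-running the tail of the network once per layer makes this sketch quadratic in depth, a cost a real implementation may well avoid.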
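
The CIFAR100 counts quoted in the Open Datasets row are easy to verify with torchvision; ImageNet100 is a 100-class subset of ImageNet with no built-in loader, so it is omitted here. The root path below is a placeholder.

```python
from torchvision import datasets, transforms

tf = transforms.ToTensor()
train = datasets.CIFAR100(root="data", train=True, download=True, transform=tf)
test = datasets.CIFAR100(root="data", train=False, download=True, transform=tf)

# Matches the counts quoted above: 50,000 train / 10,000 test, 100 classes.
assert len(train) == 50_000 and len(test) == 10_000
assert len(train.classes) == 100
```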
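
Finally, the two protocols in the Dataset Splits row come down to a small amount of arithmetic. The helper below is a hypothetical illustration (the function name is mine, not taken from PyCIL or the MRFA code).

```python
def class_splits(num_classes=100, num_tasks=10, large_base=False):
    """Number of classes per task under the two common CIL protocols.

    Small base: all classes divided equally across tasks.
    Large base: half the classes form task 0, the rest divided equally.
    """
    if large_base:
        base = num_classes // 2
        inc = (num_classes - base) // (num_tasks - 1)
        return [base] + [inc] * (num_tasks - 1)
    return [num_classes // num_tasks] * num_tasks

# CIFAR100 examples:
assert class_splits(100, 10) == [10] * 10                        # small base
assert class_splits(100, 6, large_base=True) == [50] + [10] * 5  # large base
```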