Resolving Task Confusion in Dynamic Expansion Architectures for Class Incremental Learning

Authors: Bingchen Huang, Zhineng Chen, Peng Zhou, Jiayin Chen, Zuxuan Wu

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on the CIFAR100 and ImageNet100 datasets. The results demonstrate that TCIL consistently achieves state-of-the-art accuracy. It mitigates both ITC and ONC, while showing advantages in combating catastrophic forgetting even when no rehearsal memory is reserved. Source code: https://github.com/YellowPancake/TCIL.
Researcher Affiliation | Academia | 1 Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University; 2 Shanghai Collaborative Innovation Center on Intelligent Visual Computing; 3 University of Maryland, College Park, MD, USA
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code: https://github.com/YellowPancake/TCIL.
Open Datasets | Yes | We conduct extensive experiments on CIFAR100 (Krizhevsky 2009) and ImageNet100 (Russakovsky et al. 2015). (A hedged loading and task-partition sketch follows the table.)
Dataset Splits | No | The paper describes training and testing protocols but does not provide specific details or percentages for a separate validation split.
Hardware Specification | Yes | All models are trained on a workstation with a single Nvidia 3090 GPU using PyTorch.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | We adopt the SGD optimizer with weight decay 0.0005 and batch size 128 for all experiments. We use a warmup strategy with an ending learning rate of 0.1 for 10 epochs on CIFAR100 and 20 epochs on ImageNet100, respectively. After warmup, for CIFAR100 the learning rate is 0.1 and decays to 0.01 and 0.001 at epochs 100 and 120. For ImageNet100 the learning rate decays to 0.01, 0.001 and 0.0001 at epochs 60, 120 and 180. (A hedged sketch of the CIFAR100 schedule follows the table.)
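
The quoted CIFAR100 settings map directly onto a standard PyTorch optimizer and scheduler. Below is a minimal sketch of that schedule; the linear warmup shape, the 140-epoch total, and the placeholder model are assumptions, since the quote above only gives the milestones.

```python
# Minimal sketch of the quoted CIFAR100 schedule: SGD, weight decay 5e-4,
# batch size 128; 10 warmup epochs ending at lr=0.1, then lr decayed to
# 0.01 at epoch 100 and 0.001 at epoch 120. The linear warmup shape and the
# 140-epoch budget are assumptions; only the milestones are quoted above.
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(3 * 32 * 32, 100)        # placeholder network
optimizer = SGD(model.parameters(), lr=0.1, weight_decay=0.0005)

WARMUP_EPOCHS = 10

def lr_factor(epoch):
    if epoch < WARMUP_EPOCHS:              # assumed linear warmup to the base lr
        return (epoch + 1) / WARMUP_EPOCHS
    if epoch < 100:
        return 1.0                         # lr = 0.1
    if epoch < 120:
        return 0.1                         # lr = 0.01
    return 0.01                            # lr = 0.001

scheduler = LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(140):                   # total epoch count is an assumption
    # ... one pass over the CIFAR100 training loader (batch size 128) ...
    optimizer.step()                       # stands in for the per-batch updates
    scheduler.step()
```

The ImageNet100 variant would follow the same pattern with a 20-epoch warmup and decays to 0.01, 0.001 and 0.0001 at epochs 60, 120 and 180.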
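
Since the datasets are public but no validation split is reported, the sketch below shows one common way to load CIFAR100 with torchvision and partition its 100 classes into incremental tasks. The 10-task split, the ToTensor-only transform, and the helper name task_loader are illustrative assumptions, not details taken from the paper.

```python
# Class-incremental partition of CIFAR100 via torchvision.
# The 10-task split (10 classes per task) and the ToTensor-only transform
# are assumptions for illustration, not settings confirmed by the quotes above.
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.CIFAR100("./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR100("./data", train=False, download=True, transform=transform)

num_tasks = 10
per_task = 100 // num_tasks
tasks = [set(range(t * per_task, (t + 1) * per_task)) for t in range(num_tasks)]

def task_loader(dataset, classes, batch_size=128, shuffle=True):
    """Build a DataLoader restricted to the given class set."""
    idx = [i for i, y in enumerate(dataset.targets) if y in classes]
    return DataLoader(Subset(dataset, idx), batch_size=batch_size, shuffle=shuffle)

loader_task0 = task_loader(train_set, tasks[0])   # first task: classes 0-9
```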