A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning
Authors: Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, De-Chuan Zhan
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets validate MEMO's competitive performance. |
| Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University {zhoudw, wangqiwei, yehj, zhandc}@lamda.nju.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/wangkiw/ICLR23-MEMO |
| Open Datasets | Yes | we evaluate the performance on CIFAR100 (Krizhevsky et al., 2009), and ImageNet100/1000 (Deng et al., 2009). |
| Dataset Splits | Yes | CIFAR100 contains 50,000 training and 10,000 testing images, with a total of 100 classes. ImageNet is a large-scale dataset with 1,000 classes, with about 1.28 million images for training and 50,000 for validation. The class order of training classes is shuffled with random seed 1993 (see the class-order sketch after the table). |
| Hardware Specification | Yes | All models are deployed with PyTorch (Paszke et al., 2019) and PyCIL (Zhou et al., 2021a) on NVIDIA 3090. |
| Software Dependencies | No | The paper mentions PyTorch and PyCIL with citations but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | The model is trained with a batch size of 128 for 170 epochs, and we use SGD with momentum for optimization. The learning rate starts from 0.1 and decays by 0.1 at 80 and 150 epochs (see the training-setup sketch after the table). |
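The Dataset Splits row reports the CIFAR100 train/test split and a class order shuffled with random seed 1993. A minimal sketch of reproducing that split and shuffle is shown below; the helper `shuffled_class_order` is hypothetical and not taken from the paper's or PyCIL's code.

```python
import numpy as np
from torchvision import datasets


def shuffled_class_order(num_classes: int = 100, seed: int = 1993) -> list:
    """Return a deterministic class permutation using the reported seed."""
    rng = np.random.RandomState(seed)
    order = np.arange(num_classes)
    rng.shuffle(order)
    return order.tolist()


# CIFAR100: 50,000 training and 10,000 testing images over 100 classes.
train_set = datasets.CIFAR100(root="./data", train=True, download=True)
test_set = datasets.CIFAR100(root="./data", train=False, download=True)

# Class order for the incremental task split, shuffled with seed 1993.
class_order = shuffled_class_order(num_classes=100, seed=1993)
```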
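The Experiment Setup row maps directly onto a standard PyTorch training configuration. The sketch below illustrates it under stated assumptions: the ResNet-18 backbone, momentum 0.9, weight decay 5e-4, and the bare `ToTensor` preprocessing are placeholders not specified in the row; only the batch size, epoch count, optimizer choice, and learning-rate schedule come from the paper.

```python
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.models import resnet18

# Minimal preprocessing; the paper's actual augmentations are not reported here.
transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

model = resnet18(num_classes=100)  # backbone choice is an assumption
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 150], gamma=0.1)

for epoch in range(170):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # LR: 0.1 until epoch 80, then 0.01, then 0.001 from epoch 150
```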