GACL: Exemplar-Free Generalized Analytic Continual Learning

Authors: Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response (evidence)
Research Type | Experimental | "Empirically, we conduct extensive experiments where, compared with existing GCIL methods, our GACL exhibits a consistently leading performance across various datasets and GCIL settings. Source code is available at https://github.com/CHEN-YIZHU/GACL."
Researcher Affiliation | Academia | 1) South China University of Technology, China; 2) Southern University of Science and Technology, China; 3) Shenzhen Institute, Hunan University, China; 4) Pazhou Lab, China
Pseudocode | Yes | "Algorithm 1: The pseudo-code of GACL."
Open Source Code | Yes | "Source code is available at https://github.com/CHEN-YIZHU/GACL."
Open Datasets | Yes | "We conduct experiments on three datasets: CIFAR-100 [38], ImageNet-R [39], and Tiny-ImageNet [40]."
Dataset Splits | Yes | "As the regularization term γ is not sensitive in a proper range [7], we adopt this value for all datasets for convenience. We relocate its analysis to Appendix E. The size for the buffer layer W_B is set to 5000 for both the GACL and ACIL for convenience."
Hardware Specification | Yes | "We conduct experiments in PyTorch on one NVIDIA GeForce RTX 4090 GPU with a batch size of 64 for training and 128 for inference."
Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number or other software dependencies with version numbers.
Experiment Setup | Yes | "There are two hyperparameters in the GACL, the regularization term γ and the size of the buffer layer. Here, we adopt γ = 100, which is determined by the grid search of {0, 10, 100, 500, 1000, 10000} on CIFAR-100 (by a 90%-10% train-val split). ... The size for the buffer layer W_B is set to 5000 for both the GACL and ACIL for convenience."
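
To make the quoted setup concrete, below is a minimal Python sketch of the γ grid search it describes. The helpers build_gacl, train_gcil, and evaluate are hypothetical placeholders standing in for the authors' released code, not their actual API; only the candidate grid {0, 10, 100, 500, 1000, 10000}, the 90%-10% CIFAR-100 train-val split, the buffer-layer size of 5000, and the batch sizes of 64 (training) and 128 (inference) come from the paper.

```python
# Hypothetical sketch of the gamma grid search quoted above.
# build_gacl / train_gcil / evaluate are placeholder stubs, NOT the authors'
# API; only the grid values, the 90%-10% split, the buffer size of 5000,
# and the batch sizes 64/128 are taken from the paper.
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

BUFFER_SIZE = 5000                           # size of the buffer layer W_B
GAMMA_GRID = [0, 10, 100, 500, 1000, 10000]  # candidates for gamma
TRAIN_BATCH, EVAL_BATCH = 64, 128            # batch sizes from the paper


def build_gacl(buffer_size, gamma):
    """Placeholder: the real model comes from https://github.com/CHEN-YIZHU/GACL."""
    return {"buffer_size": buffer_size, "gamma": gamma}


def train_gcil(model, loader):
    """Placeholder for the analytic (gradient-free) update; omitted here."""


def evaluate(model, loader):
    """Placeholder validation accuracy; replace with a real metric."""
    return 0.0


cifar = datasets.CIFAR100(root="./data", train=True, download=True,
                          transform=transforms.ToTensor())
n_train = int(0.9 * len(cifar))              # 90%-10% train-val split
train_set, val_set = random_split(cifar, [n_train, len(cifar) - n_train])

train_loader = DataLoader(train_set, batch_size=TRAIN_BATCH, shuffle=True)
val_loader = DataLoader(val_set, batch_size=EVAL_BATCH, shuffle=False)

best_gamma, best_acc = None, -1.0
for gamma in GAMMA_GRID:
    model = build_gacl(BUFFER_SIZE, gamma)
    train_gcil(model, train_loader)
    acc = evaluate(model, val_loader)
    if acc > best_acc:
        best_gamma, best_acc = gamma, acc
# The paper reports selecting gamma = 100 this way and reusing it for all datasets.
```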