A Unified and General Framework for Continual Learning

Authors: Zhenyi Wang, Yan Li, Li Shen, Heng Huang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.
Researcher Affiliation | Collaboration | ¹University of Maryland, College Park; ²JD Explore Academy. {zwang169, yanli18, heng}@umd.edu, mathshenli@gmail.com
Pseudocode | Yes | Algorithm 1: Refresh Learning for General CL (see the sketch after this table).
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a direct link to a code repository.
Open Datasets | Yes | We perform experiments on various datasets, including CIFAR10 (10 classes), CIFAR100 (100 classes), Tiny-ImageNet (200 classes).
Dataset Splits | Yes | Following Buzzega et al. (2020), we divided the CIFAR-10 dataset into five separate tasks, each containing two distinct classes. Similarly, we partitioned the CIFAR-100 dataset into ten tasks, each with ten classes. Additionally, we organized Tiny-ImageNet into ten tasks, each with twenty classes. (A split sketch follows this table.)
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper mentions using ResNet18 and adopting hyperparameters from the DER++ codebase, but it does not specify versions for any software dependencies (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We adopt the hyperparameters from the DER++ codebase (Buzzega et al., 2020) as the baseline settings for all the methods we compared in the experiments. Additionally, to enhance runtime efficiency in our approach, we implemented the refresh mechanism, which runs every two iterations. Table 5 (analysis of unlearning rate γ and number of unlearning steps J on CIFAR-100 with task-IL): accuracy is 77.23 ± 0.97, 77.71 ± 0.85, and 77.08 ± 0.90 for γ = 0.02, 0.03, and 0.04, respectively; and 77.71 ± 0.85, 77.76 ± 0.82, and 75.93 ± 1.06 for J = 1, 2, and 3.
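To make the Pseudocode and Experiment Setup rows concrete, below is a minimal sketch of the unlearn-then-relearn loop that Algorithm 1 describes, wired to the hyperparameters reported in Table 5 (unlearning rate γ, number of unlearning steps J, and the refresh mechanism running every two iterations). This is not the authors' implementation: the unlearning step is approximated here as plain gradient ascent, whereas the paper derives a more elaborate Fisher-information-weighted update, and names such as refresh_interval are illustrative.

```python
# Hedged sketch of refresh learning: periodically "unlearn" on the current
# batch (a few ascent steps), then take a standard relearning descent step.
import torch

def refresh_learning_step(model, loss_fn, batch, optimizer,
                          gamma: float = 0.03,        # unlearning rate (Table 5)
                          J: int = 2,                 # unlearning steps (Table 5)
                          step: int = 0,
                          refresh_interval: int = 2): # "runs every two iterations"
    inputs, targets = batch

    # Unlearn: briefly move *up* the loss surface to loosen ossified fits.
    # (The paper's actual update is noise-injected and FIM-weighted; plain
    # gradient ascent is used here only to illustrate the loop structure.)
    if step % refresh_interval == 0:
        for _ in range(J):
            loss = loss_fn(model(inputs), targets)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            with torch.no_grad():
                for p, g in zip(model.parameters(), grads):
                    p.add_(gamma * g)  # ascent, i.e., unlearning

    # Relearn: a standard descent step on the same batch.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Per Table 5, γ = 0.03 with J = 2 gives the best task-IL accuracy on CIFAR-100 (77.76 ± 0.82), while J = 3 degrades it (75.93 ± 1.06), so the unlearning phase is kept deliberately short.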
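The Dataset Splits row follows the standard class-incremental protocol of Buzzega et al. (2020). Below is a minimal sketch of that partitioning, assuming the torchvision CIFAR-10 loader; the same helper covers CIFAR-100 (10 tasks × 10 classes) and Tiny-ImageNet (10 tasks × 20 classes) given a dataset with integer labels.

```python
# Partition a labeled dataset into disjoint class-incremental tasks,
# e.g., CIFAR-10 into 5 tasks of 2 classes each.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

def split_into_tasks(dataset, num_tasks: int, classes_per_task: int):
    """Return a list of Subsets, task t holding classes [t*k, (t+1)*k)."""
    targets = torch.as_tensor(dataset.targets)
    tasks = []
    for t in range(num_tasks):
        task_classes = torch.arange(t * classes_per_task,
                                    (t + 1) * classes_per_task)
        mask = torch.isin(targets, task_classes)
        tasks.append(Subset(dataset, mask.nonzero(as_tuple=True)[0].tolist()))
    return tasks

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
cifar10_tasks = split_into_tasks(train_set, num_tasks=5, classes_per_task=2)
```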