DDGR: Continual Learning with Deep Diffusion-based Generative Replay

Authors: Rui Gao, Weiwei Liu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments in class incremental (CI) and class incremental with repetition (CIR) settings demonstrate the advantages of DDGR. Our code is available at https://github.com/xiaocangshengGR/DDGR.
Researcher Affiliation | Academia | School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China.
Pseudocode | Yes | Algorithm 1 (Deep Diffusion-based Generative Replay) and Algorithm 2 (Instruction Process); a generic replay-loop sketch follows the table.
Open Source Code | Yes | Our code is available at https://github.com/xiaocangshengGR/DDGR.
Open Datasets | Yes | We conduct experiments on two widely used datasets: CIFAR-100 and ImageNet (Deng et al., 2009). Moreover, in the CIR scenario, we use CORe50 (Lomonaco & Maltoni, 2017) to conduct experiments.
Dataset Splits | No | The paper describes how datasets are divided into tasks for the continual learning scenarios, e.g., the initial task consists of 50 random classes and each subsequent task contains five classes (a task-split sketch follows the table). It defines evaluation on a cumulative test set S^test_{0:i}, but does not explicitly detail a separate validation split or the precise sizes/percentages of train/validation/test splits for each task.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as programming language versions or library versions (e.g., Python 3.x, PyTorch x.x).
Experiment Setup | No | The paper describes the model architectures used (ResNet, AlexNet, U-Net) and discusses task-based learning settings, but it does not explicitly provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) for the experimental setup.
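The Pseudocode row names Algorithm 1 (Deep Diffusion-based Generative Replay), but this page does not reproduce it. As a rough orientation only, the following is a minimal sketch of the generic generative-replay pattern that DDGR builds on: train the classifier on the current task's real data plus samples replayed from a generator fitted to earlier tasks (in DDGR, a diffusion model). The interfaces here (old_generator.sample, pseudo-labelling with a frozen copy of the previous classifier, the unweighted loss sum) are assumptions for illustration, not the authors' released code.

```python
# Generic generative-replay skeleton (not the authors' Algorithm 1).
# A generator trained on earlier tasks synthesizes samples of old classes,
# which are mixed with the current task's real data when training the classifier.
import torch
import torch.nn.functional as F

def train_task_with_replay(classifier, optimizer, current_loader,
                           old_generator=None, old_classifier=None,
                           replay_batch=64, device="cpu"):
    classifier.train()
    for x, y in current_loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(classifier(x), y)

        if old_generator is not None:
            with torch.no_grad():
                # Hypothetical interface: draw synthetic samples of previously
                # seen classes; DDGR uses a diffusion model for this step.
                x_replay = old_generator.sample(replay_batch).to(device)
                # Pseudo-labels for the replayed samples come from a frozen
                # copy of the classifier trained on earlier tasks.
                y_replay = old_classifier(x_replay).argmax(dim=1)
            loss = loss + F.cross_entropy(classifier(x_replay), y_replay)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```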
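The Dataset Splits row quotes a class-incremental protocol for CIFAR-100: an initial task of 50 random classes, five new classes per subsequent task, and evaluation on the cumulative test set S^test_{0:i}. The sketch below builds such a task sequence; only the 50/5 split and the cumulative test set come from the quoted text, while the use of torchvision, the function name build_ci_tasks, and all other details are assumptions.

```python
# Minimal sketch of a class-incremental (CI) task split for CIFAR-100:
# an initial task with 50 random classes, then 5 new classes per task.
# Illustrative only; this is not the authors' released data pipeline.
import random
from torch.utils.data import Subset
from torchvision import datasets, transforms

def build_ci_tasks(root="./data", initial_classes=50, classes_per_task=5, seed=0):
    transform = transforms.ToTensor()
    train = datasets.CIFAR100(root, train=True, download=True, transform=transform)
    test = datasets.CIFAR100(root, train=False, download=True, transform=transform)

    # Shuffle the 100 class labels, then carve them into task-sized groups.
    classes = list(range(100))
    random.Random(seed).shuffle(classes)
    groups = [classes[:initial_classes]]
    for start in range(initial_classes, 100, classes_per_task):
        groups.append(classes[start:start + classes_per_task])

    tasks, seen = [], set()
    for group in groups:
        group_set = set(group)
        seen |= group_set
        train_idx = [i for i, y in enumerate(train.targets) if y in group_set]
        # Evaluation after task i uses the cumulative test set S^test_{0:i}.
        test_idx = [i for i, y in enumerate(test.targets) if y in seen]
        tasks.append((Subset(train, train_idx), Subset(test, test_idx)))
    return tasks
```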