Saliency-driven Experience Replay for Continual Learning

Authors: Giovanni Bellitto, Federica Proietto Salanitri, Matteo Pennisi, Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Simone Calderara, Simone Palazzo, Concetto Spampinato

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results confirm that SER effectively enhances the performance (in some cases by up to about 20 percentage points) of state-of-the-art continual learning methods, in both class-incremental and task-incremental settings.
Researcher Affiliation | Academia | Giovanni Bellitto, University of Catania (giovanni.bellitto@unict.it); Federica Proietto Salanitri, University of Catania (federica.proiettosalanitri@unict.it); Matteo Pennisi, University of Catania (matteo.pennisi@phd.unict.it); Matteo Boschini, University of Modena and Reggio Emilia (matteo.boschini@unimore.it); Lorenzo Bonicelli, University of Modena and Reggio Emilia (lorenzo.bonicelli@unimore.it); Angelo Porrello, University of Modena and Reggio Emilia (angelo.porrello@unimore.it); Simone Calderara, University of Modena and Reggio Emilia (simone.calderara@unimore.it); Simone Palazzo, University of Catania (simone.palazzo@unict.it); Concetto Spampinato, University of Catania (concetto.spampinato@unict.it)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/perceivelab/SER
Open Datasets | Yes | Split Mini-ImageNet [66, 13, 21, 17] includes 100 classes from ImageNet, allowing for a longer task sequence; for each class, 500 images are used for training and 100 for evaluation. Split FG-ImageNet [58] is a benchmark for fine-grained image classification, used to test CL methods on a more challenging task than traditional ones.
Dataset Splits | Yes | For both datasets, images are resized to 288×384 pixels and split into twenty 5-way tasks (see the task-split sketch after this table).
Hardware Specification | Yes | All experiments were conducted on a workstation with a 24-core CPU, 500 GB of RAM, and an NVIDIA A100 GPU (40 GB VRAM).
Software Dependencies | No | The paper mentions the Mammoth framework [9] and UNISAL [20] but does not provide specific version numbers for these or other software components.
Experiment Setup | Yes | In compliance with the online learning setting, all models are trained for a single epoch using SGD as the optimizer, with a fixed batch size of 8 for both the input stream and the replay buffer. Rehearsal methods are evaluated with three memory buffer sizes (1000, 2000, and 5000). A training-loop sketch follows this table.
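
The twenty 5-way split in the Dataset Splits row can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' Mammoth code: the fixed random class order, the seed, and the make_task_splits helper are assumptions; only the 288×384 resize and the partition of 100 classes into 20 tasks of 5 classes come from the paper.

```python
# Hypothetical reconstruction of the task split described above, not the
# authors' Mammoth code. Assumed: PyTorch/torchvision, a fixed random class
# order, and the helper names below. From the paper: images resized to
# 288x384 and 100 classes partitioned into twenty 5-way tasks.
import torch
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((288, 384)),  # (height, width) as stated in the paper
    transforms.ToTensor(),
])

def make_task_splits(num_classes=100, classes_per_task=5, seed=0):
    """Partition class indices into sequential tasks (class order is assumed)."""
    g = torch.Generator().manual_seed(seed)
    order = torch.randperm(num_classes, generator=g).tolist()
    return [order[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]

tasks = make_task_splits()  # 20 tasks, 5 classes each
assert len(tasks) == 20 and all(len(t) == 5 for t in tasks)
```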
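
The rehearsal protocol in the Experiment Setup row can be sketched in the same spirit. A minimal sketch, assuming PyTorch: the ReservoirBuffer class, the train_online helper, and the learning rate of 0.03 are illustrative assumptions; only the single-epoch training, SGD, the batch size of 8 for both stream and buffer, and the buffer capacities of 1000/2000/5000 come from the paper. SER's saliency-driven objective is not reproduced here; this is plain experience replay.

```python
# Minimal sketch of the rehearsal protocol described above, assuming PyTorch.
# From the paper: one epoch (single pass over the stream), SGD, batch size 8
# for both the stream and the replay buffer, buffer capacity in
# {1000, 2000, 5000}. ReservoirBuffer, train_online, and lr=0.03 are
# illustrative assumptions; SER's saliency-driven loss is NOT reproduced.
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-capacity memory filled by reservoir sampling, a common choice
    in rehearsal-based continual learning."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.seen = 0   # examples observed so far
        self.data = []  # list of (x, y) pairs

    def add(self, x, y):
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randint(0, self.seen)  # uniform over [0, seen]
            if j < self.capacity:
                self.data[j] = (x, y)
        self.seen += 1

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_online(model, stream_loader, buffer, lr=0.03, replay_bs=8):
    """One pass over the stream; each step replays up to 8 buffered samples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in stream_loader:  # stream batches of size 8
        loss = F.cross_entropy(model(x), y)
        if buffer.data:         # replay once the buffer is non-empty
            bx, by = buffer.sample(replay_bs)
            loss = loss + F.cross_entropy(model(bx), by)
        opt.zero_grad()
        loss.backward()
        opt.step()
        for xi, yi in zip(x, y):  # insert stream samples one by one
            buffer.add(xi, yi)
```

Reservoir sampling keeps the buffer an approximately uniform sample of the stream seen so far, which is why it pairs naturally with the single-epoch online setting; instantiating ReservoirBuffer with capacity 1000, 2000, or 5000 corresponds to the three configurations evaluated in the paper.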