On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning

Authors: Lorenzo Bonicelli, Matteo Boschini, Angelo Porrello, Concetto Spampinato, Simone Calderara

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training. (Section 4, Experiments) To assess our proposal, we perform a suite of experiments through the Mammoth framework [12, 17, 9, 6, 27, 13, 14, 50], an open-source codebase introduced in [15] for testing CL algorithms. (A conceptual sketch of the rehearsal-buffer mechanism shared by these methods is given after the table.)
Researcher Affiliation | Academia | Lorenzo Bonicelli (1), Matteo Boschini (1), Angelo Porrello (1), Concetto Spampinato (2), Simone Calderara (1); (1) AImageLab, University of Modena and Reggio Emilia; (2) PeRCeiVe Lab, University of Catania
Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/aimagelab/LiDER.
Open Datasets | Yes | Split CIFAR-100: an initial evaluation is carried out by splitting the 32×32 images from the 100 classes of CIFAR-100 [43] into 10 tasks. Split miniImageNet: this setting is designed to assess models on longer task sequences; it splits the 100 classes of miniImageNet [76], a subset of ImageNet with images resized to 84×84, into 20 consecutive tasks. Split CUB-200: a final, more challenging benchmark involves classifying large-scale 224×224 images from the Caltech-UCSD Birds-200-2011 [79] dataset, organized in a stream of ten 20-fold classification tasks. (A sketch of this class-incremental task splitting is given after the table.)
Dataset Splits | No | The paper describes how classes are split into sequential tasks, but it does not explicitly provide the training, validation, and test splits (e.g., percentages or counts) for each task in the main text; some experimental details are deferred to the supplementary material.
Hardware Specification | Yes | Experiments were conducted on an on-premise desktop machine equipped with an RTX 2080 consumer GPU.
Software Dependencies | No | The paper mentions the use of the Mammoth framework but does not specify versions for programming languages (e.g., Python) or for other software dependencies such as deep learning frameworks (e.g., PyTorch, TensorFlow) or CUDA.
Experiment Setup | No | Due to space constraints, we kindly refer the reader to the supplementary material for additional experimental details (e.g., optimizer, hyperparameters, etc.).
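
The rehearsal CL methods that LiDER is applied to (see the Research Type row) maintain a small memory buffer of past examples and replay it alongside the current task's data. The snippet below is a minimal, illustrative sketch of such a buffer using reservoir sampling, the filling policy commonly used by ER- and DER-style methods; the class name and interface are hypothetical, and this is not the authors' or Mammoth's implementation.

```python
import random
import torch


class ReservoirBuffer:
    """Fixed-size rehearsal memory filled with reservoir sampling (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.examples = []   # stored (input, label) pairs
        self.num_seen = 0    # total examples observed in the stream so far

    def add(self, x: torch.Tensor, y: int) -> None:
        """Insert a stream example; every example ends up stored with equal probability."""
        self.num_seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append((x, y))
        else:
            # Replace a random slot with probability capacity / num_seen.
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.examples[idx] = (x, y)

    def sample(self, batch_size: int):
        """Draw a replay mini-batch from the buffer."""
        return random.sample(self.examples, min(batch_size, len(self.examples)))
```

During training, the replay mini-batch drawn from such a buffer is combined with the current batch; LiDER's contribution is an additional regularization term computed on the buffered examples, which the sketch above does not include.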
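
The Open Datasets row describes class-incremental benchmarks built by partitioning a dataset's classes into disjoint, consecutive tasks. The sketch below shows one way to derive the 10 tasks of Split CIFAR-100 (10 classes per task) from torchvision's CIFAR100, assuming a contiguous class ordering; the helper name split_into_tasks is hypothetical, and the actual task construction is handled by the Mammoth codebase.

```python
import numpy as np
from torchvision.datasets import CIFAR100


def split_into_tasks(targets, n_tasks: int = 10):
    """Partition sample indices into n_tasks groups of consecutive class labels."""
    targets = np.asarray(targets)
    n_classes = int(targets.max()) + 1
    classes_per_task = n_classes // n_tasks  # 10 classes per task for CIFAR-100
    tasks = []
    for t in range(n_tasks):
        task_classes = np.arange(t * classes_per_task, (t + 1) * classes_per_task)
        task_indices = np.where(np.isin(targets, task_classes))[0]
        tasks.append(task_indices)
    return tasks


# Example: indices of the 10 sequential tasks of Split CIFAR-100 (training split).
train_set = CIFAR100(root="./data", train=True, download=True)
task_indices = split_into_tasks(train_set.targets, n_tasks=10)
```

The same recipe yields Split miniImageNet (100 classes over 20 tasks of 5 classes) and Split CUB-200 (200 classes over 10 tasks of 20 classes), modulo the different image resolutions noted in the table.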