Label Delay in Online Continual Learning

Authors: Botos Csaba, Wenxuan Zhang, Matthias Müller, Ser Nam Lim, Philip Torr, Adel Bibi

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our extensive experiments amounting to 25,000 GPU hours, we show that merely increasing the computational resources is insufficient to tackle this challenge.
Researcher Affiliation | Collaboration | (1) University of Oxford; (2) King Abdullah University of Science and Technology; (3) Intel Labs; (4) University of Central Florida
Pseudocode | Yes | Algorithm 1: Single OCL time step with Label Delay
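The referenced Algorithm 1 is not reproduced on this page. As a rough illustration of the setting it describes, the sketch below implements one evaluate-then-train step in which labels arrive d steps after their images. All names here (ocl_step_with_label_delay, label_buffer) are ours, and the paper's actual algorithm may differ in detail (e.g., it may additionally draw samples from a replay memory):

```python
import torch
from collections import deque

def ocl_step_with_label_delay(model, optimizer, criterion,
                              stream_batch, label_buffer, delay):
    """One simulated OCL time step under label delay.

    At time t the learner receives a new *unlabeled* batch x_t and is
    evaluated on it first; the labels for x_{t-d} only become available
    d steps later, so supervised updates use the delayed batch.
    """
    x_t, y_t = stream_batch  # y_t is held back from the learner for `delay` steps

    # 1) Evaluate on the newest data before any update. The evaluator
    #    knows y_t; the learner does not see it yet.
    model.eval()
    with torch.no_grad():
        preds = model(x_t).argmax(dim=1)
        online_acc = (preds == y_t).float().mean().item()

    # 2) Queue the new batch; its labels "arrive" `delay` steps from now.
    label_buffer.append((x_t, y_t))

    # 3) Train on the oldest batch whose labels have just become available.
    if len(label_buffer) > delay:
        x_old, y_old = label_buffer.popleft()
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(x_old), y_old)
        loss.backward()
        optimizer.step()

    return online_acc

# Usage: label_buffer = deque(), then call once per incoming stream batch.
```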
Open Source Code | Yes | The implementation for reproducing our experiments can be found at https://github.com/botcs/label-delay-exp.
Open Datasets | Yes | We conduct our experiments on four large-scale online continual learning datasets: Continual Localization (CLOC) [4], Continual Google Landmarks (CGLM) [5], Functional Map of the World (FMoW) [6], and Yearbook [7].
Dataset Splits | Yes | We follow the same training and validation split of CLOC as in [4] and of CGLM as in [5], and the officially released splits for FMoW [6] and Yearbook [7].
Hardware Specification | Yes | Most of the experiments use a single A100 GPU with 12 CPUs.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | We use SGD with a learning rate of 0.005, momentum of 0.9, and weight decay of 10^-5. We apply random cropping and resizing to the images, such that the resulting input has a resolution of 224 × 224.
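For concreteness, the quoted hyperparameters translate directly into PyTorch as below. The backbone choice and the normalization statistics are placeholders of ours, not stated in this row:

```python
import torch.optim as optim
from torchvision import models, transforms

# Placeholder backbone: the row does not restate the paper's architecture.
model = models.resnet18()

# Optimizer exactly as quoted: SGD, lr 0.005, momentum 0.9, weight decay 1e-5.
optimizer = optim.SGD(model.parameters(), lr=0.005,
                      momentum=0.9, weight_decay=1e-5)

# Random crop-and-resize to a 224x224 input, as described. The normalization
# statistics are an assumption (standard ImageNet values), not part of the quote.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```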