Online Bias Correction for Task-Free Continual Learning

Authors: Aristotelis Chrysakis, Marie-Francine Moens

ICLR 2023

For each reproducibility variable below, the assessed result is given first, followed by the supporting LLM response (quoted from the paper where available).
Research Type: Experimental
"We evaluate the performance of OBC extensively, and we show that it significantly improves a number of task-free continual learning methods, over multiple datasets (Section 4)."
Researcher Affiliation: Academia
"Aristotelis Chrysakis & Marie-Francine Moens, Department of Computer Science, KU Leuven, Leuven, Belgium"
Pseudocode: Yes
"Algorithm 1 Online Bias Correction"
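
The pseudocode itself is not reproduced on this page. Purely as an illustration of what an online bias-correction step could look like, the sketch below assumes that OBC updates a separate copy of the output layer on (approximately class-balanced) replay-buffer batches and uses that copy at prediction time. All names here (obc_step, buffer.sample, model.features) are hypothetical; the authors' repository is the authoritative reference.

```python
# Illustrative sketch only -- not the authors' implementation
# (see https://github.com/chrysakis/OBC for the real code).
import torch
import torch.nn.functional as F

def obc_step(model, corrected_head, head_optimizer, buffer,
             batch_size=50, smoothing=0.5):
    """One hypothetical bias-correction update on a replay-buffer batch.

    Assumes `buffer.sample` returns an (inputs, labels) batch and that
    `model.features` exposes the feature extractor, which OBC leaves frozen.
    """
    x, y = buffer.sample(batch_size)
    with torch.no_grad():
        feats = model.features(x)  # features are not updated by this step
    logits = corrected_head(feats)
    # Batch size 50 and label smoothing 0.5 follow the paper's reported setup.
    loss = F.cross_entropy(logits, y, label_smoothing=smoothing)
    head_optimizer.zero_grad()
    loss.backward()
    head_optimizer.step()
    return loss.item()
```
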
Open Source Code: Yes
"Our code can be found at https://github.com/chrysakis/OBC."
Open Datasets: Yes
"The Fashion MNIST dataset (Xiao et al., 2017) contains 60,000 grayscale images of clothing items split in 10 classes. CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) each contain 50,000 color images... Finally, Tiny ImageNet (Le & Yang, 2015) contains 100,000 color images..."
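
Three of the four datasets ship with torchvision; Tiny ImageNet does not. The paper does not state which loaders were used, so the snippet below is only one plausible way to obtain the data:

```python
# Illustrative only: the paper does not say which framework or loaders it used.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
fmnist = datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
# Tiny ImageNet (Le & Yang, 2015) is not bundled with torchvision and must be
# downloaded separately, e.g. from http://cs231n.stanford.edu/tiny-imagenet-200.zip
```
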
Dataset Splits: Yes
"As Lomonaco & Maltoni (2017) suggest, sessions 3, 7, and 10 are used for evaluation purposes (approximately 45,000 images), and the remaining 8 sessions are used to construct the stream (approximately 120,000 images)."
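
The sessions referenced here are the 11 recording sessions of the CORe50 benchmark (Lomonaco & Maltoni, 2017). A minimal sketch of that partition, with variable names of my own choosing:

```python
# Hedged sketch of the CORe50 session split quoted above (variable names
# are my own; the dataset interface is assumed, not taken from the paper).
EVAL_SESSIONS = {3, 7, 10}                     # ~45,000 evaluation images
STREAM_SESSIONS = [s for s in range(1, 12)     # 8 remaining sessions,
                   if s not in EVAL_SESSIONS]  # ~120,000 stream images
```
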
Hardware Specification: No
The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies: No
The paper mentions specific architectures such as ResNet-18 and a simple CNN, implying the use of a machine learning framework (e.g., PyTorch), but it does not provide version numbers for any software dependencies.
Experiment Setup: Yes
"we use a learning rate of 0.1 when using the reduced ResNet-18 architecture. When using the simpler CNN, we use a learning rate of 0.03. The stream and replay batch sizes were both set to 10... The batch size of OBC was set to 50... and a label-smoothing factor of 0.5"
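
For quick reference, the reported values can be gathered into a single configuration object. The key names below are my own; the values are those quoted above:

```python
# Hedged summary of the reported hyperparameters (illustrative key names;
# values taken from the quoted experiment setup).
HPARAMS = {
    "lr_reduced_resnet18": 0.1,  # learning rate with the reduced ResNet-18
    "lr_simple_cnn": 0.03,       # learning rate with the simpler CNN
    "stream_batch_size": 10,
    "replay_batch_size": 10,
    "obc_batch_size": 50,
    "label_smoothing": 0.5,
}
```
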