Online Continual Learning from Imbalanced Data
Authors: Aristotelis Chrysakis, Marie-Francine Moens
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we aim to compare CBRS to the state of the art when it comes to memory population strategies under online CL scenarios. We exclusively consider replay-based training, as it is the only one suitable for our setting. |
| Researcher Affiliation | Academia | Department of Computer Science, KU Leuven, Leuven, Belgium. Correspondence to: Aristotelis Chrysakis <aristotelis.chrysakis@kuleuven.be>. |
| Pseudocode | Yes | We sketch the pseudocode for the CBRS algorithm in Algorithm 1. (A hedged Python sketch of this algorithm follows the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository for the methodology described. |
| Open Datasets | Yes | Following Aljundi et al. (2019b), we select MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky, 2012) for our experiments. In addition, we use Fashion MNIST (Xiao et al., 2017) and CIFAR-100 (Krizhevsky, 2012). All four of the datasets used in this work are freely available online. |
| Dataset Splits | No | The paper states 'All datasets are used as split benchmarks' and mentions evaluating on 'the standard test set of each selected dataset', but does not explicitly provide percentages or counts for training, validation, and test splits needed for reproduction. |
| Hardware Specification | Yes | All experiments are run on an NVIDIA TITAN Xp. |
| Software Dependencies | No | The paper mentions following pseudocode for GSS and optimizing other algorithms but does not provide specific software names with version numbers for libraries or frameworks used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We use a learning rate of 0.05 when training the MLP and 0.01 when training the ResNet-18. ... we set the batch size at b = 10, ... and we perform nb = 5 update steps per incoming batch... (A training-loop sketch with these values follows below.) |
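
The CBRS memory-population rule referenced in the Pseudocode row (Algorithm 1 of the paper) is compact enough to restate. Below is a minimal Python sketch of the algorithm as we read it; the class name `CBRSMemory` and its internals are our own, not the authors' code. While the memory is not yet filled, every incoming instance is stored; once it is full, an instance of a non-full class overwrites a random instance of a largest class, while an instance of a full class is kept with probability m_c / n_c (its class's stored count over its class's stream count), i.e., a class-conditional reservoir step.

```python
import random
from collections import Counter

class CBRSMemory:
    """Sketch of class-balancing reservoir sampling (CBRS), Algorithm 1."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []                 # stored (x, y) pairs
        self.stream_counts = Counter()   # n_c: class-c instances seen in the stream
        self.full_classes = set()        # classes that are, or once were, the largest

    def _stored_counts(self):
        return Counter(y for _, y in self.memory)

    def update(self, x, y):
        self.stream_counts[y] += 1
        if len(self.memory) < self.capacity:
            # Phase 1: memory not yet filled, store everything.
            self.memory.append((x, y))
            return
        stored = self._stored_counts()
        largest = max(stored.values())
        # A class is "full" if it currently is (or previously was) the largest.
        self.full_classes.update(c for c, m in stored.items() if m == largest)
        if y not in self.full_classes:
            # Overwrite a uniformly random stored instance of a largest class.
            candidates = [i for i, (_, c) in enumerate(self.memory)
                          if stored[c] == largest]
            self.memory[random.choice(candidates)] = (x, y)
        else:
            # Class-conditional reservoir step: keep with probability m_c / n_c.
            m_c, n_c = stored[y], self.stream_counts[y]
            if random.random() <= m_c / n_c:
                candidates = [i for i, (_, c) in enumerate(self.memory) if c == y]
                self.memory[random.choice(candidates)] = (x, y)
```

Within each class this rule reduces to plain reservoir sampling, so the stored instances of a class are a uniform sample of that class's portion of the stream; the "full class" bookkeeping is what keeps the memory close to class-balanced under imbalanced streams.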
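
The reported experiment setup (last row of the table) translates into a short replay training loop. The following PyTorch sketch is an assumption-laden illustration, not the authors' code: it uses SGD with the reported learning rates (0.05 for the MLP, 0.01 for the ResNet-18), incoming batches of b = 10, and nb = 5 update steps per incoming batch. The function name `train_online` and the uniform replay sampling are our simplifications; the paper additionally proposes a weighted replay scheme.

```python
import random
import torch
import torch.nn as nn

def train_online(model, stream, memory, lr=0.05, n_updates=5, replay_size=10):
    """Replay-based online training sketch; `memory` is a CBRSMemory as above."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # 0.05 MLP / 0.01 ResNet-18
    for batch_x, batch_y in stream:                # incoming batches of size b = 10
        for x, y in zip(batch_x, batch_y):         # populate memory via CBRS
            memory.update(x, int(y))
        for _ in range(n_updates):                 # nb = 5 update steps per batch
            replay = random.sample(memory.memory,
                                   min(replay_size, len(memory.memory)))
            rx = torch.stack([x for x, _ in replay])
            ry = torch.tensor([y for _, y in replay])
            inputs = torch.cat([batch_x, rx])      # train on incoming + replayed data
            targets = torch.cat([batch_y, ry])
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
```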