Online Learned Continual Compression with Adaptive Quantization Modules

Authors: Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 4.1 we present results on standard supervised continual learning benchmarks on CIFAR-10. In Section 4.2 we evaluate other downstream tasks such as standard iid training applied on the storage at the end of online continual compression. For this evaluation we consider larger images from Imagenet, as well as on lidar data. Finally we apply AQM on observations of an agent in an RL environment. Evaluations are shown in Table 1.
Researcher Affiliation | Collaboration | 1McGill, 2Mila, 3Facebook AI Research, 4University of Montreal, 5Element AI.
Pseudocode | Yes | Algorithm 1: AQM LEARNING WITH SELF-REPLAY, Algorithm 2: ADAPTIVECOMPRESS, Algorithm 3: Update Buffer Rep (a hedged sketch of the self-replay loop follows the table)
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | In Section 4.1 we present results on standard supervised continual learning benchmarks on CIFAR-10. In Section 4.2 we evaluate other downstream tasks such as standard iid training applied on the storage at the end of online continual compression. For this evaluation we consider larger images from Imagenet, as well as on lidar data. Finally we apply AQM on observations of an agent in an RL environment. We proceed to train AQM on the Kitti Dataset (Geiger et al., 2013).
Dataset Splits | Yes | We evaluate with the standard CIFAR-10 split (Aljundi et al., 2018), where 5 tasks are presented sequentially, each adding two new classes. (A task-split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) required to replicate the experiments.
Experiment Setup | No | The paper does not provide specific hyperparameter values (e.g., learning rate, batch size, optimizer settings) or detailed training configurations for the proposed method. It mentions general settings such as "We use 5 epochs in all the experiments for this baseline" but lacks concrete details for the authors' own models. (A placeholder setup checklist follows the table.)
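
The Pseudocode row names Algorithm 1, AQM LEARNING WITH SELF-REPLAY. As a rough illustration of what such a self-replay loop typically involves, here is a minimal PyTorch sketch; the model interface (encode/decode, a call returning a reconstruction and a quantization loss), the buffer layout, and the loss terms are assumptions made for illustration and are not taken from the paper's Algorithm 1.

```python
# Minimal sketch of an online compression step with self-replay.
# `model` is assumed to be a quantizing autoencoder exposing encode()/decode()
# and returning (reconstruction, quantization_loss) when called; these names
# and loss terms are illustrative, not the paper's Algorithm 1.
import random
import torch
import torch.nn.functional as F

def online_step(model, optimizer, incoming_x, buffer, replay_size=32):
    # Reconstruction loss on the incoming (uncompressed) batch.
    recon, vq_loss = model(incoming_x)
    loss = F.mse_loss(recon, incoming_x) + vq_loss

    # Self-replay: decode a sample of stored codes and keep training the
    # compressor to reconstruct them, so stored data stays decodable.
    if len(buffer) >= replay_size:
        codes = torch.stack(random.sample(buffer, replay_size))
        with torch.no_grad():
            replay_x = model.decode(codes)  # decoded past samples
        recon_r, vq_loss_r = model(replay_x)
        loss = loss + F.mse_loss(recon_r, replay_x) + vq_loss_r

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Store the compressed codes of the new batch in the replay buffer.
    with torch.no_grad():
        buffer.extend(model.encode(incoming_x).unbind(0))
    return loss.item()
```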
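The Dataset Splits row quotes the standard class-incremental CIFAR-10 protocol (5 tasks, each adding two classes). One minimal way to build that split with torchvision is sketched below; the class-to-task assignment (consecutive label pairs) is an assumption, since the report only states that each task adds two new classes.

```python
# Sketch of a 5-task class-incremental CIFAR-10 split: tasks 0..4 hold
# classes {0,1}, {2,3}, ..., {8,9}. The exact class ordering used in the
# paper's protocol may differ.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

cifar = datasets.CIFAR10(root="./data", train=True, download=True,
                         transform=transforms.ToTensor())
targets = torch.tensor(cifar.targets)

tasks = []
for t in range(5):
    a, b = 2 * t, 2 * t + 1
    idx = torch.nonzero((targets == a) | (targets == b)).squeeze(1)
    tasks.append(Subset(cifar, idx.tolist()))

# Each element of `tasks` is then streamed to the learner sequentially.
```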
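Finally, since the Experiment Setup row notes that concrete hyperparameters are not reported, the placeholder below lists the kind of settings a replication would still need to pin down; every entry is hypothetical except the 5-epoch figure quoted for the baseline.

```python
# Hypothetical replication checklist; None marks values the paper does not
# report. Only `baseline_epochs` comes from the text quoted in the table.
setup = {
    "optimizer": None,             # e.g. Adam or SGD, not stated
    "learning_rate": None,         # not stated
    "batch_size": None,            # not stated
    "replay_buffer_budget": None,  # storage budget for compressed samples
    "baseline_epochs": 5,          # "We use 5 epochs in all the experiments for this baseline."
}
```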