MIND: Multi-Task Incremental Network Distillation

Authors: Jacopo Bonato, Francesco Pelosin, Luigi Sabetta, Alessandro Nicolosi

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For our experiments, we consider 4 datasets in the standard class-incremental (CI) learning scenario with all classes equally split among 10 tasks... We report the task agnostic (no task-label) accuracy over all the classes of the dataset after training the last task: ACC_TAG = ... We also report the task aware setting... We run the experiments on a machine equipped with: GPU NVIDIA GeForce RTX 3080... Ablation Studies: Through the following ablation studies, we investigate the effects and contributions of the different components of MIND. |
| Researcher Affiliation | Industry | Jacopo Bonato*¹, Francesco Pelosin*¹,², Luigi Sabetta*¹, Alessandro Nicolosi¹. ¹Leonardo Labs, Rome, Italy; ²Covision Lab, Brixen South-Tyrol, Italy. jacopo.bonato.ext@leonardo.com, francesco.pelosin@covisionlab.com, luigi.sabetta.ext@leonardo.com, alessandro.nicolosi@leonardo.com |
| Pseudocode | No | The paper describes its method textually and with diagrams (e.g., Figures 1, 2, and 3) but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | We make code and experiments available at https://github.com/Lsabetta/MIND. |
| Open Datasets | Yes | For our experiments, we consider 4 datasets... CIFAR100/10 (Krizhevsky, Hinton et al. 2009)... TinyImageNet/10 (Chaudhry et al. 2019)... Core50/10 (Lomonaco and Maltoni 2017)... Synbols/10 (Lacoste et al. 2020) |
| Dataset Splits | No | The paper describes splitting the data into tasks and mentions hyperparameter optimization, but it does not explicitly specify training, validation, and test splits for the datasets (e.g., percentages or sample counts for each partition). |
| Hardware Specification | Yes | We run the experiments on a machine equipped with: GPU NVIDIA GeForce RTX 3080, 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz processor, and 32 GB of RAM. |
| Software Dependencies | No | The paper mentions using the Avalanche (Lomonaco et al. 2021) and FACIL (Masana et al. 2023) frameworks but does not provide version numbers for these frameworks or for any other software dependencies. |
| Experiment Setup | Yes | The dimension of the embeddings is set to D = 64 as all the competitors. ... To ensure coherence, we opted for a value of 5 for all other experiments, as it delivers the best performance when evaluated on CIFAR100/10. |
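
The Research Type row above quotes a protocol in which all classes are split equally among 10 tasks (e.g., CIFAR100/10). The snippet below is a minimal, hypothetical sketch of such a class-incremental split on CIFAR-100 using torchvision; the function name, seed, and class ordering are illustrative assumptions, not the authors' exact procedure, which they build on the Avalanche and FACIL frameworks.

```python
# Minimal, hypothetical sketch: split CIFAR-100 into 10 class-incremental tasks
# of 10 classes each, matching the "CIFAR100/10" protocol quoted above.
# The seed and class ordering are illustrative assumptions, not the authors' setup.
import numpy as np
from torchvision.datasets import CIFAR100

def make_class_incremental_tasks(dataset, num_tasks=10, seed=0):
    """Return one array of sample indices per task, with classes split equally."""
    targets = np.asarray(dataset.targets)
    classes = np.unique(targets)
    rng = np.random.default_rng(seed)
    rng.shuffle(classes)                                  # random class order (assumption)
    task_classes = np.array_split(classes, num_tasks)     # 10 groups of 10 classes
    return [np.flatnonzero(np.isin(targets, cls)) for cls in task_classes]

train_set = CIFAR100(root="./data", train=True, download=True)
tasks = make_class_incremental_tasks(train_set, num_tasks=10)
assert len(tasks) == 10
assert sum(len(ix) for ix in tasks) == len(train_set)    # every sample in exactly one task
```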
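The ACC_TAG expression quoted in the same row is elided in the excerpt. For reference only, a common way to define task-agnostic accuracy after the final task in class-incremental evaluation (not necessarily the paper's exact formula) is the average, over the T = 10 tasks, of each task's test accuracy when the arg max is taken over all classes seen:

```latex
% Assumption: a standard class-incremental formulation, not the paper's exact expression.
% C_{1:T} is the set of all classes across the T tasks; f_c(x) is the score for class c.
\[
\mathrm{ACC}_{\mathrm{TAG}}
  = \frac{1}{T}\sum_{i=1}^{T}
    \frac{1}{\lvert \mathcal{D}_i^{\text{test}} \rvert}
    \sum_{(x,\,y)\in \mathcal{D}_i^{\text{test}}}
    \mathbb{1}\!\left[\operatorname*{arg\,max}_{c \,\in\, \mathcal{C}_{1:T}} f_c(x) = y \right]
\]
```

The task-aware setting mentioned in the excerpt would instead restrict the arg max to the classes of the sample's own task.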