BooVAE: Boosting Approach for Continual Learning of VAE
Authors: Evgenii Egorov, Anna Kuzina, Evgeny Burnaev
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate the proposed algorithm on commonly used benchmarks (MNIST, Fashion-MNIST, NotMNIST) and CelebA for disjoint sequential image generation tasks. |
| Researcher Affiliation | Academia | Evgenii Egorov, University of Amsterdam, egorov.evgenyy@ya.ru; Anna Kuzina, Vrije Universiteit, av.kuzina@yandex.ru; Evgeny Burnaev, Skoltech, AIRI, e.burnaev@skoltech.ru |
| Pseudocode | Yes | Algorithm 1: BooVAE algorithm |
| Open Source Code | Yes | We provide code at https://github.com/AKuzina/BooVAE. |
| Open Datasets | Yes | We perform experiments on MNIST, notMNIST, Fashion-MNIST, and CelebA datasets. |
| Dataset Splits | No | The paper refers to a 'training dataset' and a 'test dataset' for its experiments but does not provide explicit percentages or sample counts for training, validation, and test splits. |
| Hardware Specification | Yes | In Supp. (B.5) we mention that we use 4 NVIDIA V100 GPUs for each experiment. |
| Software Dependencies | No | The paper mentions PyTorch in connection with the Inception V3 network but does not explicitly list software dependencies with specific version numbers in the main text. |
| Experiment Setup | Yes | For each task, we add a new classification head (one fully connected layer) and train for 200 epochs with batch size 500. |
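
The Experiment Setup row quotes a per-task protocol: a new classification head (one fully connected layer) is added for each task and trained for 200 epochs with batch size 500. Below is a minimal sketch of that loop, assuming a frozen pretrained encoder, a feature dimension of 40, an Adam optimizer, and a learning rate of 1e-3; apart from the epoch count and batch size, these names and values are illustrative assumptions, not taken from the paper or its repository.

```python
# Hedged sketch (not the authors' code): per-task linear-head evaluation,
# with one new fully connected head per task, trained for 200 epochs
# at batch size 500. Encoder, feature size, and optimizer are assumptions.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

FEATURE_DIM = 40        # assumed feature/latent size
EPOCHS = 200            # from the quoted setup
BATCH_SIZE = 500        # from the quoted setup


def train_head_for_task(encoder: nn.Module,
                        task_dataset,
                        num_classes: int,
                        device: str = "cuda") -> nn.Linear:
    """Attach and train a fresh linear head on top of a frozen encoder."""
    encoder.eval()                              # encoder weights are not updated here
    head = nn.Linear(FEATURE_DIM, num_classes).to(device)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # assumed learning rate
    criterion = nn.CrossEntropyLoss()
    loader = DataLoader(task_dataset, batch_size=BATCH_SIZE, shuffle=True)

    for _ in range(EPOCHS):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                features = encoder(x)           # assumed: encoder maps images to features
            logits = head(features)
            loss = criterion(logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return head


# Hypothetical usage: one head per task in the task sequence.
# heads = [train_head_for_task(encoder, ds, n_cls) for ds, n_cls in tasks]
```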