Lifelong Variational Autoencoder via Online Adversarial Expansion Strategy

Authors: Fei Ye, Adrian G. Bors

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that OAES can significantly improve the performance of VAE-DEM for TFCL with a minimum number of components.
Researcher Affiliation | Academia | Department of Computer Science, University of York, York YO10 5GH, UK; fy689@york.ac.uk, adrian.bors@york.ac.uk
Pseudocode | Yes | In this section, we provide the algorithm implementation of OAES (see the pseudocode in Appendix-A from the SM), which is summarized into three stages:
Open Source Code | Yes | Supplementary materials (SM) and source code are available at https://github.com/dtuzi123/OAES.
Open Datasets | Yes | Datasets. For the generative modelling task, we use the following datasets: (1) Split MNIST/Fashion: MNIST/Fashion (LeCun et al. 1998) split into ten parts by class. (2) Split MNIST-Fashion: Split MNIST and Split Fashion combined in a class-incremental manner. (3) Cross-Domain: Split MNIST-Fashion combined with OMNIGLOT (Lake, Salakhutdinov, and Tenenbaum 2015). We adopt Split MNIST, Split CIFAR10 and Split CIFAR100 from (De Lange and Tuytelaars 2021) for the classification tasks.
Dataset Splits | No | The paper mentions using a 'testing set' but does not explicitly provide details about training/validation/test dataset splits (e.g., percentages, counts, or a citation to a predefined split methodology).
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | At the i-th training step (T_i), the model only accesses a data batch B_i ⊂ D_k^S with a batch size of 10. The threshold β for Split MNIST, Split Fashion, Split MNIST-Fashion and Cross-Domain is 4.2, 3, 4 and 4.2, respectively. For testing the generative modelling task, we estimate the sample log-likelihood (Log) using the IWAE bound (Burda, Grosse, and Salakhutdinov 2015) with 1000 importance samples. The maximum number of components for the various models is set to 30 to avoid memory overload. The memory buffer size is 1000 for Split M-S and Split M-C.
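The class-incremental splits described above (e.g. dividing MNIST into ten single-class parts) can be sketched as follows; the helper name and interface are our own illustration, not code from the paper:

```python
import numpy as np

def class_incremental_split(labels, classes_per_task=1):
    """Group sample indices into tasks by class label, in increasing
    class order (e.g. Split MNIST: ten single-class parts).

    labels: 1-D integer array of class labels for the whole dataset.
    Returns a list of index arrays, one per task.
    """
    classes = np.unique(labels)  # sorted distinct class labels
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        # indices of all samples whose label belongs to this task
        tasks.append(np.where(np.isin(labels, task_classes))[0])
    return tasks
```

With `classes_per_task=1` on MNIST labels this yields ten tasks, one per digit; combining the resulting task lists of two datasets back to back gives a class-incremental sequence like Split MNIST-Fashion.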
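The IWAE evaluation bound used above is a log-mean-exp over importance weights, log(1/K * sum_k p(x, z_k)/q(z_k|x)) with z_k drawn from the encoder q(z|x). A minimal numerically stable sketch, assuming the per-sample log-densities have already been computed (the function name is ours, not the paper's):

```python
import numpy as np

def iwae_bound(log_p_xz, log_q_zx):
    """Importance-weighted log-likelihood estimate (Burda et al., 2015).

    log_p_xz: (K,) array of log p(x, z_k) for K samples z_k ~ q(z|x)
    log_q_zx: (K,) array of log q(z_k | x)
    Returns log(1/K * sum_k exp(log_p_xz - log_q_zx)),
    a lower bound on log p(x) that tightens as K grows.
    """
    log_w = log_p_xz - log_q_zx          # log importance weights
    m = log_w.max()                      # stabilised log-sum-exp
    return m + np.log(np.mean(np.exp(log_w - m)))
```

The paper evaluates with K = 1000 importance samples; in practice one would fill `log_p_xz` and `log_q_zx` from the trained decoder and encoder densities.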