Learning Dynamic Latent Spaces for Lifelong Generative Modelling
Authors: Fei Ye, Adrian G. Bors
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform several experiments which show that ORVAE achieves state-of-the-art results under TFCL. ... The results for the density estimation task are shown in Table 1 ... From Table 2, the results for the Fréchet Inception Distance (FID) (Heusel et al. 2017) and Inception Score (IS) (Salimans et al. 2016) indicate that ORVAE outperforms other models in reconstruction quality. ... We study the performance of ORVAE when changing the memory size. We train ORVAE when considering 192, 256, 320, 384, 700, and 1024 samples in the memory, on Split MNIST, and the results are reported in Fig. 2. |
| Researcher Affiliation | Academia | Fei Ye and Adrian G. Bors Department of Computer Science, University of York, York YO10 5GH, UK fy689@york.ac.uk, adrian.bors@york.ac.uk |
| Pseudocode | No | The paper describes the learning algorithm in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Supplementary materials (SM) and source code are available at https://github.com/dtuzi123/ORVAE |
| Open Datasets | Yes | Datasets and evaluation criteria: We consider MNIST (LeCun et al. 1998), Fashion (Xiao, Rasul, and Vollgraf 2017) and OMNIGLOT (Lake, Salakhutdinov, and Tenenbaum 2015) datasets for the density estimation task. ... We evaluate the generative ability of various models for CIFAR10 (Krizhevsky and Hinton 2009) and Tiny-ImageNet (Le and Yang 2015) datasets. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits or refer to predefined validation splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | The threshold λ, controlling the size of the architecture in Eq. (12), is set to 30 and 40 for Split MNIST and Split Fashion, respectively. The maximum number of samples in the memory is set to 512. ... All models use ELBO with a small weight of 0.01 for the KL divergence term to avoid over-regularisation (Ye and Bors 2022c). |
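The Experiment Setup row quotes the paper's use of an ELBO with the KL divergence term down-weighted to 0.01 to avoid over-regularisation. A minimal sketch of such a loss (illustrative only, not the authors' implementation; function and parameter names are assumptions) might look like:

```python
import numpy as np

def elbo_loss(x, x_recon, mu, logvar, kl_weight=0.01):
    """Negative ELBO with a down-weighted KL term.

    The paper reports training all models with a small KL weight (0.01);
    this Gaussian-VAE formulation is a hypothetical illustration.
    """
    # Reconstruction term: squared error summed over features, per sample.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # KL divergence between N(mu, sigma^2) and the standard normal prior.
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)
    # Average over the batch; the small kl_weight softens regularisation.
    return np.mean(recon + kl_weight * kl)

# A perfect reconstruction under a standard-normal posterior gives zero loss.
x = np.zeros((4, 8))
loss = elbo_loss(x, x, np.zeros((4, 2)), np.zeros((4, 2)))
```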