Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Latent Bernoulli Autoencoder
Authors: Jiri Fajtl, Vasileios Argyriou, Dorothy Monekosso, Paolo Remagnino
ICML 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method performs on a par with or better than the current state-of-the-art methods on the common CelebA, CIFAR-10 and MNIST datasets. |
| Researcher Affiliation | Academia | ¹Kingston University, London, UK; ²Leeds Beckett University, Leeds, UK. |
| Pseudocode | Yes | Algorithm 1: Latent *s* to hyperplane normal *r* inversion |
| Open Source Code | Yes | PyTorch code and trained models are publicly available on GitHub: https://github.com/ok1zjf/lbae |
| Open Datasets | Yes | We trained and tested our model on the CelebA (Liu et al., 2015), CIFAR-10 (Krizhevsky & Hinton, 2009) and MNIST (LeCun et al., 2010) datasets. |
| Dataset Splits | No | with the default train/test splits (a loading sketch using the defaults follows the table) |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions 'PyTorch code' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The model was trained with Adam (Kingma & Ba, 2015) with learning rate 10⁻³, no weight decay and batch size 512. Mean squared error is used as the reconstruction loss, except for MNIST where binary cross entropy is used. Table 3 specifies epochs: MNIST 2000, CIFAR-10 2000, CelebA 500. (A hedged training sketch follows the table.) |
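
The default train/test splits referenced in the Dataset Splits row are the ones shipped with torchvision, so loading them can be sketched as below. This is a hedged illustration, not code from the paper: the `data` root path and the bare `ToTensor` transform are assumptions.

```python
# Sketch: loading CelebA, CIFAR-10 and MNIST with torchvision's
# default train/test splits (the paper reports using the defaults).
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # assumed transform; the paper does not specify one

# MNIST and CIFAR-10 expose the default split via the `train` flag.
mnist_train = torchvision.datasets.MNIST("data", train=True, download=True, transform=transform)
mnist_test = torchvision.datasets.MNIST("data", train=False, download=True, transform=transform)

cifar_train = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=transform)

# CelebA uses a `split` argument instead of a boolean flag.
celeba_train = torchvision.datasets.CelebA("data", split="train", download=True, transform=transform)
celeba_test = torchvision.datasets.CelebA("data", split="test", download=True, transform=transform)
```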
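The hyperparameters quoted in the Experiment Setup row translate into roughly the following PyTorch training loop. This is a minimal sketch, not the authors' released code: the `LBAE` model class and the `train` helper are hypothetical stand-ins; only the optimizer settings, batch size, loss choices, and epoch counts come from the table above.

```python
# Sketch of the reported setup: Adam, lr 10⁻³, no weight decay,
# batch size 512; MSE reconstruction loss (BCE for MNIST).
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train(model, dataset, epochs, use_bce=False, device="cuda"):
    loader = DataLoader(dataset, batch_size=512, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.0)
    model.to(device).train()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            recon = model(x)  # assumes outputs in [0, 1] when BCE is used
            # Binary cross entropy for MNIST, mean squared error otherwise.
            loss = F.binary_cross_entropy(recon, x) if use_bce else F.mse_loss(recon, x)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Epochs per Table 3: MNIST 2000, CIFAR-10 2000, CelebA 500, e.g.:
# train(LBAE(), mnist_train, epochs=2000, use_bce=True)
```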