Linear Disentangled Representations and Unsupervised Action Estimation
Authors: Matthew Painter, Adam Prügel-Bennett, Jonathon Hare
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section empirically explores the admission of irreducible representations in latent spaces. In particular, we look at standard VAE baselines and search for known cyclic symmetry structure. All experiments in this and later sections report errors as one standard deviation over 3 runs, using randomly selected validation splits of 10%. |
| Researcher Affiliation | Academia | Matthew Painter, Jonathon Hare, Adam Prügel-Bennett, Department of Electronics and Computer Science, University of Southampton, {mp2u16, jsh2, apb}@ecs.soton.ac.uk |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | All experimental details are reported in supplementary material section A and code is available at https://github.com/MattPainter01/UnsupervisedActionEstimation. |
| Open Datasets | Yes | Problem Setting: We shall use the Flatland problem [Caselles-Dupré et al., 2018] for consistency with ForwardVAE, a grid world... dSprites: Table 3b reports reconstruction errors on dSprites, with symmetries in translation, scale and rotation leading to symmetry structure G = C3 × C10 × C8 × C8 (a sketch of this block structure appears after the table). |
| Dataset Splits | Yes | All experiments in this and later sections report errors as one standard deviation over 3 runs, using randomly selected validation splits of 10%. (A sketch of this split-and-error protocol appears after the table.) |
| Hardware Specification | No | The authors acknowledge the use of 'the IRIDIS High Performance Computing Facility' but do not provide specific details on the GPU or CPU models, memory, or other hardware components used for their experiments. |
| Software Dependencies | No | The paper mentions software like 'PyTorch' and the 'PyGame framework' but does not provide specific version numbers for these or any other ancillary software components, which are necessary for reproducibility. |
| Experiment Setup | Yes | All experiments in this and later sections report errors as one standard deviation over 3 runs, using randomly selected validation splits of 10%. We also found low learning rates for the VAE and policy network (lr ≈ 10^-4) with a high learning rate for the internal RGrVAE matrix representations (lr ≈ 10^-2) to be very beneficial for consistent convergence (regardless of task) and generally prevented convergence to suboptimal minima. (A sketch of the per-module learning-rate setup appears after the table.) |
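The split-and-error protocol quoted in the table (a randomly selected 10% validation split, with results reported as a mean and one standard deviation over 3 runs) can be expressed compactly in PyTorch. The sketch below is illustrative only and is not taken from the authors' repository; the helper names `random_val_split` and `mean_and_std`, and the `train_and_eval` callable, are hypothetical placeholders.

```python
# Hedged sketch of the quoted evaluation protocol: random 10% validation split,
# results reported as mean +/- one standard deviation over 3 runs.
import statistics
from typing import Callable

import torch
from torch.utils.data import Dataset, Subset, random_split


def random_val_split(dataset: Dataset, val_frac: float = 0.1):
    """Randomly hold out `val_frac` of the dataset as a validation split."""
    n_val = int(val_frac * len(dataset))
    return random_split(dataset, [len(dataset) - n_val, n_val])


def mean_and_std(train_and_eval: Callable[[Subset, Subset], float],
                 dataset: Dataset, n_runs: int = 3) -> str:
    """Repeat an experiment `n_runs` times and report mean +/- one std."""
    scores = []
    for seed in range(n_runs):
        torch.manual_seed(seed)                      # fresh split and init per run
        train_set, val_set = random_val_split(dataset)
        scores.append(train_and_eval(train_set, val_set))
    return f"{statistics.mean(scores):.3f} +/- {statistics.stdev(scores):.3f}"
```

The learning-rate setup quoted under Experiment Setup (lr ≈ 10^-4 for the VAE and policy network, lr ≈ 10^-2 for the internal RGrVAE matrix representations) is most naturally expressed in PyTorch with optimizer parameter groups. The modules below are small stand-ins chosen for illustration, not the architectures used in the paper.

```python
import torch

# Hedged sketch: per-parameter-group learning rates in a single Adam optimizer,
# mirroring the quoted setup. The module definitions are illustrative only.
vae = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 64))
policy = torch.nn.Linear(32, 4)
# Stand-in for the internal group-representation matrices (four 4x4 identities).
rep_params = torch.nn.Parameter(torch.eye(4).repeat(4, 1, 1))

optimizer = torch.optim.Adam([
    {"params": vae.parameters(), "lr": 1e-4},     # low lr for the VAE
    {"params": policy.parameters(), "lr": 1e-4},  # low lr for the policy network
    {"params": [rep_params], "lr": 1e-2},         # high lr for representation matrices
])
```

Parameter groups let one optimizer apply a different learning rate to each set of parameters, which matches the quoted setup without maintaining separate optimizers.

For context on the quoted symmetry structure G = C3 × C10 × C8 × C8: in the linear disentangled setting, each cyclic factor C_n can be represented by a 2×2 rotation through 2π/n acting on its own latent subspace, with the full group acting block-diagonally. The NumPy sketch below illustrates this standard construction; it is not the authors' implementation.

```python
import numpy as np


def cyclic_generator(n: int) -> np.ndarray:
    """2x2 rotation by 2*pi/n, a real representation of the generator of C_n."""
    theta = 2.0 * np.pi / n
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])


def group_generators(orders=(3, 10, 8, 8)):
    """One block-diagonal generator per cyclic factor of C3 x C10 x C8 x C8,
    each acting on its own 2D latent subspace and fixing the rest."""
    dim = 2 * len(orders)
    gens = []
    for i, n in enumerate(orders):
        g = np.eye(dim)
        g[2 * i:2 * i + 2, 2 * i:2 * i + 2] = cyclic_generator(n)
        gens.append(g)
    return gens


# Sanity check: applying each generator n times returns the identity,
# so the cyclic order of every factor is respected.
for n, g in zip((3, 10, 8, 8), group_generators()):
    assert np.allclose(np.linalg.matrix_power(g, n), np.eye(g.shape[0]))
```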