Learning Disentangled Representations and Group Structure of Dynamical Environments
Authors: Robin Quessard, Thomas Barrett, William Clements
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we experimentally evaluate our method for learning disentangled representations in several explicitly symmetric environments with different types of symmetries. |
| Researcher Affiliation | Collaboration | Robin Quessard (1,2), Thomas D. Barrett (3), William R. Clements (1); (1) Indust.ai, Paris, France; (2) École Normale Supérieure, Paris, France; (3) University of Oxford, Oxford, UK |
| Pseudocode | Yes | (see supplementary information for pseudo-code) |
| Open Source Code | Yes | The code to reproduce these experiments is provided in notebook form at https://github.com/IndustAI/learning-group-structure. |
| Open Datasets | No | The paper describes generating data from environments such as Flatland (Caselles-Dupré et al., 2018) and a 3D teapot (Crow, 1987) using random policies, but does not provide concrete access information (link, DOI, repository, or a formal dataset citation) for a publicly available dataset of observations. |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset split information (e.g., percentages, sample counts, or citations to predefined splits). |
| Hardware Specification | No | The paper does not mention the hardware used to run the experiments (e.g., GPU/CPU models or memory amounts). |
| Software Dependencies | No | The paper does not name the ancillary software needed to replicate the experiments with version numbers (e.g., Python 3.8, CPLEX 12.4). |
| Experiment Setup | No | The paper describes general model components (e.g., CNNs, FCNs, loss functions) but lacks concrete setup details such as hyperparameter values (learning rate, batch size, number of epochs) or optimizer settings; a hypothetical sketch of such a setup follows this table. |
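For concreteness, here is a minimal sketch of the kind of setup the Experiment Setup row refers to: a small CNN encoder, per-action rotation matrices built from learned plane-rotation angles (the parameterization the paper describes), and a latent prediction loss. All dimensions and hyperparameter values below (`LATENT_DIM`, `N_ACTIONS`, the layer sizes, `lr=1e-3`) are illustrative assumptions, not values reported in the paper or its repository.

```python
# Hypothetical sketch; architecture sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 4   # assumed latent dimensionality
N_ACTIONS = 4    # assumed number of environment actions

class Encoder(nn.Module):
    """Small CNN mapping an observation to a unit-norm latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, LATENT_DIM),
        )

    def forward(self, x):
        z = self.net(x)
        return z / (z.norm(dim=-1, keepdim=True) + 1e-8)  # project to unit sphere

def rotation_matrix(thetas):
    """Build an n-D rotation as a product of 2-D (Givens) plane rotations,
    one learned angle per coordinate plane."""
    R = torch.eye(LATENT_DIM)
    k = 0
    for i in range(LATENT_DIM):
        for j in range(i + 1, LATENT_DIM):
            G = torch.eye(LATENT_DIM)
            c, s = torch.cos(thetas[k]), torch.sin(thetas[k])
            G[i, i], G[j, j] = c, c
            G[i, j], G[j, i] = -s, s
            R = R @ G
            k += 1
    return R

n_planes = LATENT_DIM * (LATENT_DIM - 1) // 2
angles = nn.Parameter(torch.zeros(N_ACTIONS, n_planes))  # one angle set per action
encoder = Encoder()
opt = torch.optim.Adam(list(encoder.parameters()) + [angles], lr=1e-3)  # assumed lr

def training_step(obs, action, next_obs):
    """One update on a batch of transitions sharing a single action index:
    the rotation assigned to `action` should map z_t to z_{t+1}."""
    z, z_next = encoder(obs), encoder(next_obs)
    R = rotation_matrix(angles[action])
    loss = ((z @ R.T - z_next) ** 2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A caller would pass `obs` and `next_obs` as image batches of shape `(B, 3, H, W)` and `action` as an integer index; the exact data pipeline, loss weighting, and any regularization terms are precisely the details the row above flags as unreported.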