Learning Plannable Representations with Causal InfoGAN
Authors: Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart J. Russell, Pieter Abbeel
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We begin our investigation with a set of toy tasks, specifically designed to demonstrate the benefits of CIGAN, where we can also perform an extensive quantitative evaluation. We later present experiments on a real dataset of robotic rope manipulation. |
| Researcher Affiliation | Academia | ¹Berkeley AI Research, University of California, Berkeley; ²Department of Physics, University of Chicago |
| Pseudocode | No | The paper mentions "we provide a detailed algorithm in Appendix C," but no pseudocode or algorithm blocks are present in the main body of the paper. |
| Open Source Code | Yes | Code is available online at http://github.com/thanard/causal-infogan. |
| Open Datasets | Yes | We later present experiments on real image data collected by Nair et al. [29] of a robot randomly poking a rope. ... We train a CIGAN model on the rope manipulation data of [29]. |
| Dataset Splits | No | The paper states: "We chose algorithm parameters and stopping criteria by measuring the average feasibility score on a validation set of start/goal observations", but it does not specify the size, percentage, or method used to create this validation split from the full dataset (see the sketch below the table). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper describes the models and general experimental procedure but does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or system-level training configurations in the main text. |
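To make the Dataset Splits finding concrete, the following is a minimal sketch of what a fully specified validation split could look like. Everything in it is hypothetical and not taken from the paper or its released code: the 10% fraction, the seed, the sample count, and the `make_split` helper are illustrative assumptions only.

```python
import numpy as np

def make_split(n_samples, val_fraction=0.1, seed=0):
    """Deterministically split sample indices into train/validation sets.

    All values here are hypothetical: the paper does not report the
    split fraction, seed, or selection method it used.
    """
    rng = np.random.RandomState(seed)
    indices = rng.permutation(n_samples)
    n_val = int(n_samples * val_fraction)
    return indices[n_val:], indices[:n_val]  # (train_idx, val_idx)

# Hypothetical usage: hold out start/goal observation pairs before
# measuring the average feasibility score on the validation set.
train_idx, val_idx = make_split(n_samples=10000, val_fraction=0.1, seed=0)
```

Reporting even this level of detail (split fraction, seed, and selection method) would be sufficient to make the validation protocol reproducible.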