Probabilistic Recurrent State-Space Models

Authors: Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, Sebastian Trimpe

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of the proposed PR-SSM is evaluated on a set of real-world benchmark datasets in comparison to state-of-the-art probabilistic model learning methods. Scalability and robustness are demonstrated on a high-dimensional problem.
Researcher Affiliation | Collaboration | Bosch Center for Artificial Intelligence, Renningen, Germany; Max Planck Institute for Intelligent Systems, Stuttgart/Tübingen, Germany; University of Southern California, Los Angeles, USA; Machine Learning and Robotics Lab, University of Stuttgart, Germany.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at: https://github.com/boschresearch/PR-SSM.
Open Datasets | Yes | The performance of PR-SSM is assessed in comparison to state-of-the-art model learning methods on several real-world datasets, as previously utilized by Mattos et al. (2015). Table 1 of the paper lists the datasets: ACTUATOR, BALLBEAM, DRIVES, FURNACE, DRYER, SARCOS.
Dataset Splits | No | The paper refers to a 'test dataset' but does not specify the train/validation/test splits (percentages or counts) needed to reproduce the experiment.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers.
Experiment Setup | Yes | In the experiments, 50 latent state samples were employed (details in the supplementary material); a sketch illustrating this sampling step follows the table.
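
To make the Experiment Setup entry more concrete, the following is a minimal sketch of sampling-based latent state propagation with 50 state samples, the quantity reported in the paper. The transition function, state dimensionality, rollout horizon, and noise level below are placeholder assumptions chosen only for illustration; the paper's actual model learns a Gaussian process transition, and the authors' own implementation is available in the repository linked above.

import numpy as np

rng = np.random.default_rng(0)

N = 50                # number of latent state samples (value stated in the paper)
D_X = 4               # latent state dimensionality (placeholder assumption)
T = 100               # rollout horizon (placeholder assumption)
PROCESS_NOISE = 0.05  # transition noise standard deviation (placeholder assumption)


def transition_mean(x, u):
    # Placeholder transition mean; the paper instead learns this function as a GP.
    return 0.9 * x + 0.1 * np.tanh(x + u)


def rollout(x0_samples, controls):
    # Propagate all N latent state samples jointly through the control sequence.
    states = [x0_samples]
    for u_t in controls:
        mean = transition_mean(states[-1], u_t)
        noise = PROCESS_NOISE * rng.standard_normal(mean.shape)
        states.append(mean + noise)
    return np.stack(states)  # shape: (T + 1, N, D_X)


x0 = rng.standard_normal((N, D_X))   # samples from an assumed initial state distribution
u_seq = rng.standard_normal((T, 1))  # example control inputs, one scalar input per step
trajectories = rollout(x0, u_seq)
print(trajectories.shape)            # (101, 50, 4)

Predictive observations would then be obtained by mapping each sampled latent trajectory through an observation model and summarizing statistics over the 50 samples.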