Relational State-Space Model for Stochastic Multi-Object Systems
Authors: Fan Yang, Ling Chen, Fan Zhou, Yusong Gao, Wei Cao
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The utility of R-SSM is empirically evaluated on synthetic and real time series datasets. Our experiments on synthetic and real-world time series datasets show that R-SSM achieves competitive test likelihood and good prediction performance in comparison to GNN-based AR models and other sequential LVMs. |
| Researcher Affiliation | Collaboration | College of Computer Science and Technology, Zhejiang University, Hangzhou, China Alibaba Group, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1: Estimate the VSMC bound $\mathcal{L}^{K}_{\mathrm{SMC}}$ (a minimal sketch of this estimator follows the table). |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a code repository. |
| Open Datasets | Yes | The METR-LA dataset (Li et al., 2018) contains 4 months of 1D traffic speed measurements that were recorded via 207 sensors and aggregated into 5-minute windows. The basketball dataset includes 107,146 training examples and 13,845 test examples, each of which contains the 2D trajectories of ten players and the ball recorded at 6Hz for 50 time steps. |
| Dataset Splits | Yes | We generate 10K examples for training, validation, and test, respectively. We train our model on small time windows spanning 2 hours and use a 7:1:2 split for training, validation, and test (a split sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions "We implement R-SSM using the TensorFlow Probability library (Dillon et al., 2017)", but it does not specify a version number for this library, nor does it list the other versioned software components required for replication. |
| Experiment Setup | Yes | We use the Adam optimizer with an initial learning rate of 0.001 and gradient clipping at 1.0 for all experiments; the learning rate is annealed with a linear cosine decay, and β1 = β2 = 1.0 for the auxiliary losses. Training uses 4 SMC samples with a batch size of 16 (synthetic toy dataset), 4 SMC samples with a batch size of 64 (basketball player movement), and 3 SMC samples with a batch size of 16 (road traffic). An optimizer sketch matching this setup follows the table. |
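The pseudocode row above points to Algorithm 1, which estimates the K-particle VSMC bound. The following is a minimal NumPy sketch of the generic estimator, not the paper's implementation: `init_fn`, `propose_fn`, and `log_weight_fn` are hypothetical callables standing in for R-SSM's GNN-parameterized initial distribution, proposal, and per-particle weight term.

```python
import numpy as np

def vsmc_bound(observations, init_fn, propose_fn, log_weight_fn, K, rng):
    """Estimate the K-particle VSMC bound for one sequence.

    init_fn(K, rng)                -> initial particles, shape (K, ...)
    propose_fn(particles, t, rng)  -> proposed particles at step t
    log_weight_fn(particles, y, t) -> per-particle log importance weights
    All three interfaces are assumptions, not the paper's API.
    """
    particles = init_fn(K, rng)
    log_bound = 0.0
    for t, y in enumerate(observations):
        particles = propose_fn(particles, t, rng)    # sample from proposal
        logw = log_weight_fn(particles, y, t)        # shape (K,)
        # Accumulate log of the average importance weight (stable log-sum-exp).
        m = logw.max()
        log_bound += m + np.log(np.exp(logw - m).mean())
        # Multinomial resampling with the normalized weights.
        probs = np.exp(logw - m)
        probs /= probs.sum()
        particles = particles[rng.choice(K, size=K, p=probs)]
    return log_bound
```

The "4 SMC samples" quoted in the setup row would correspond to K=4 here. Multinomial resampling at every step keeps the sketch simple; the paper's Algorithm 1 may use a different resampling scheme.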
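For the 7:1:2 road-traffic split quoted above, here is a sketch of one plausible chronological split over pre-built 2-hour windows. The paper does not state whether the METR-LA windows are split chronologically or at random, so this is an assumption.

```python
def chronological_split(windows, train_frac=0.7, val_frac=0.1):
    """Split pre-built time windows 7:1:2 in temporal order (assumed)."""
    n = len(windows)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (windows[:n_train],
            windows[n_train:n_train + n_val],
            windows[n_train + n_val:])
```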
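The training setup names Adam with an initial rate of 0.001, gradient clipping at 1.0, and a linear cosine decay. A minimal TensorFlow sketch follows; reading the clipping as per-tensor norm clipping (`clipnorm`) and using the standard linear cosine decay shape (Bello et al., 2017) with default period and offset parameters are assumptions.

```python
import math
import tensorflow as tf

def linear_cosine_decay(step, decay_steps, lr0=1e-3,
                        num_periods=0.5, alpha=0.0, beta=1e-3):
    """Linear cosine decay schedule; everything except lr0 = 0.001
    is an assumed default, not taken from the paper."""
    t = min(step, decay_steps) / decay_steps
    linear_part = 1.0 - t
    cosine_part = 0.5 * (1.0 + math.cos(2.0 * math.pi * num_periods * t))
    return lr0 * ((alpha + linear_part) * cosine_part + beta)

# Adam with the quoted initial rate; `clipnorm=1.0` is one plausible
# reading of "a gradient clipping of 1.0".
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

# During training, re-set optimizer.learning_rate from the schedule above
# at each step; the decay horizon (decay_steps) is not given in the paper.
```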