Graph Switching Dynamical Systems

Authors: Yongtuo Liu, Sara Magliacane, Miltiadis Kofinas, Efstratios Gavves

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We introduce two new datasets for this setting, a synthesized ODE-driven particles dataset and a real-world Salsa Couple Dancing dataset. Experiments show that GRASS can consistently outperform previous state-of-the-art methods.
Researcher Affiliation | Collaboration | University of Amsterdam; MIT-IBM Watson AI Lab.
Pseudocode | Yes | Algorithm 1: Inference algorithm for GRASS. Input: time series y_{1:T} and interaction edge prior distribution p(e_{1:T}). Output: learned parameters ϕ and θ. (A hedged sketch of this training loop follows the table.)
Open Source Code | Yes | The code and datasets are available at https://github.com/yongtuoliu/Graph-Switching-Dynamical-Systems.
Open Datasets | Yes | The code and datasets are available at https://github.com/yongtuoliu/Graph-Switching-Dynamical-Systems. ... To evaluate the proposed methods and compare against baselines, we introduce two datasets for benchmarking, inspired by the single-object literature: the synthesized ODE-driven particle dataset, and the Salsa Couple dancing dataset.
Dataset Splits | Yes | We follow the sample splitting proportion of synthesized datasets in REDSDS (Ansari et al., 2021) (i.e., test data is around 5% of training data) and create 4,928 samples for training, 191 samples for validation, and 204 samples for testing. (A split sketch follows the table.)
Hardware Specification | Yes | Each experiment runs on one Nvidia GeForce RTX 3090 GPU.
Software Dependencies | No | The paper describes neural network architectures (e.g., bi-GRU, MLP, RNN) but does not provide version numbers for software dependencies such as the programming language, deep learning framework (e.g., PyTorch, TensorFlow), or other libraries.
Experiment Setup | Yes | We train on both datasets with a fixed batch size of 20 for 60,000 training steps. We use the Adam optimizer with 10^-5 weight decay and clip the gradient norm to 10. The learning rate is warmed up linearly from 5*10^-5 to 2*10^-4 over the first 2,000 steps, and then decays following a cosine schedule with a rate of 0.99. (An optimizer and learning-rate schedule sketch follows the table.)
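
The pseudocode row above (Algorithm 1) quotes only the inputs (the time series y_{1:T} and the edge prior p(e_{1:T})) and outputs (parameters ϕ and θ). Below is a minimal sketch of how such an amortized variational training loop could look, assuming PyTorch and hypothetical GrassEncoder/GrassDecoder modules; it illustrates the stated inputs and outputs and is not the authors' released implementation.

```python
# Hypothetical sketch of the Algorithm 1 training loop: amortized variational
# inference over per-object switching states, conditioned on an interaction-edge
# prior. Module interfaces and the ELBO decomposition are assumptions, not the
# code released by the authors.
import torch

def train_grass(encoder, decoder, loader, edge_prior, num_steps=60_000, lr=2e-4):
    # phi = encoder (inference network) parameters, theta = decoder (generative) parameters
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr, weight_decay=1e-5)
    step = 0
    while step < num_steps:
        for y in loader:  # y: (batch, T, num_objects, obs_dim)
            # q_phi(z, s, e | y): continuous states z, discrete switches s, interaction edges e
            z, s_logits, e_logits, kl = encoder(y, edge_prior)
            y_hat = decoder(z, s_logits, e_logits)           # p_theta(y | z, s, e)
            recon = torch.nn.functional.mse_loss(y_hat, y)   # reconstruction term
            loss = recon + kl                                # negative ELBO (up to constants)
            opt.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(params, max_norm=10.0)
            opt.step()
            step += 1
            if step >= num_steps:
                break
    return encoder, decoder
```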
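For the dataset-splits row, the quoted counts (4,928 / 191 / 204) sum to 5,323 samples. The sketch below simply reproduces those sizes with torch.utils.data.random_split; the authors create their splits during data generation, so this is only an illustration of the reported proportions.

```python
# Minimal sketch reproducing the reported split sizes for the ODE-driven particle
# dataset (4,928 train / 191 val / 204 test); the dataset object itself is assumed.
import torch
from torch.utils.data import random_split

def split_particle_dataset(dataset, seed=0):
    sizes = (4928, 191, 204)                   # counts quoted from the paper
    assert len(dataset) == sum(sizes), "expected 5,323 samples in total"
    gen = torch.Generator().manual_seed(seed)  # fixed seed for a reproducible split
    return random_split(dataset, sizes, generator=gen)
```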
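For the experiment-setup row, a hedged PyTorch sketch of the stated optimizer and learning-rate schedule is given below. The linear warmup to 2*10^-4 over 2,000 steps follows the quoted text directly; how the "rate of 0.99" parameterizes the cosine decay is not specified, so the plain cosine-to-zero decay used here is an assumption.

```python
# Hedged sketch of the reported optimization setup: Adam with 1e-5 weight decay,
# gradient clipping at norm 10, 60,000 training steps, and a learning rate warmed
# up linearly from 5e-5 to 2e-4 over the first 2,000 steps before a cosine decay.
import math
import torch

TOTAL_STEPS, WARMUP_STEPS = 60_000, 2_000
BASE_LR, PEAK_LR = 5e-5, 2e-4

def lr_lambda(step):
    # Returns a multiplier relative to PEAK_LR, as expected by LambdaLR.
    if step < WARMUP_STEPS:
        frac = step / WARMUP_STEPS
        return (BASE_LR + frac * (PEAK_LR - BASE_LR)) / PEAK_LR   # linear warmup
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))             # cosine decay (floor assumed to be 0)

def make_optimizer(model):
    opt = torch.optim.Adam(model.parameters(), lr=PEAK_LR, weight_decay=1e-5)
    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched

# Per step: loss.backward();
# torch.nn.utils.clip_grad_norm_(model.parameters(), 10.0); opt.step(); sched.step()
```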