Online Relational Inference for Evolving Multi-agent Interacting Systems

Authors: Beomseok Kang, Priyabrata Saha, Sudarshan Sharma, Biswadeep Chakraborty, Saibal Mukhopadhyay

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on both synthetic datasets and real-world data (CMU MoCap for human motion) demonstrate that ORI significantly improves the accuracy and adaptability of relational inference in dynamic settings compared to existing methods. ... We experimented with three common benchmarks, synthetic springs and charged systems, and CMU MoCap (human motion) [22].
Researcher Affiliation | Academia | Beomseok Kang, Priyabrata Saha, Sudarshan Sharma, Biswadeep Chakraborty, Saibal Mukhopadhyay, Georgia Institute of Technology, {beomseok,smukhopadhyay6}@gatech.edu
Pseudocode | No | The paper describes the training procedure and its components using text and mathematical equations, but it does not include a formal pseudocode block or algorithm listing.
Open Source Code | Yes | Code is available at https://github.com/beomseokg/ORI.
Open Datasets | Yes | We experimented with three common benchmarks, synthetic springs and charged systems, and CMU MoCap (human motion) [22]. ... CMU. Carnegie-Mellon Motion Capture Database, 2003. URL http://mocap.cs.cmu.edu.
Dataset Splits | No | First, the key difference in our training setup compared to the offline learning setup is that both batch and epoch are 1 (i.e., streaming data). There is no separate validation or test dataset in online learning.
Hardware Specification | Yes | We trained the models with a single GTX 2080Ti GPU.
Software Dependencies | No | The paper mentions using implemented decoders from NRI [11] and MPM [23] but does not specify version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | For the synthetic datasets, the model observed the first 30 time steps (x_{t-30:t}) and then predicted the next 30 time steps (x_{t:t+30}). ... The hidden size of the decoders is set to 256 by default. The learning rate for the decoders is 1e-4. The initial learning rate (η(0)) for the adjacency matrix is 100 for ORI with NRI decoders and 20 for ORI with MPM decoders. The lower and upper bounds for the learning rate in AdaRelation are η_min = 100, η_max = 200 for ORI with NRI decoders, and η_min = 20, η_max = 50 for ORI with MPM decoders. The threshold ε and adaptation step size α in AdaRelation are set to 0.05 and 1, respectively.
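The AdaRelation hyperparameters quoted above (η(0) = 100, η_min = 100, η_max = 200, ε = 0.05, α = 1 for the NRI-decoder variant) can be illustrated with a minimal sketch. The exact adaptation rule below is an assumption for illustration only (raise the adjacency learning rate while the inferred relations are still changing beyond the threshold ε, lower it once they settle, and clip to the stated bounds); the function name `ada_relation_step` and the `change_metric` argument are hypothetical, not from the paper.

```python
# Hypothetical sketch of an AdaRelation-style learning-rate update for the
# adjacency matrix, using the hyperparameter values quoted in the table.
# Only the values (eta_0 = 100, bounds [100, 200], eps = 0.05, alpha = 1)
# come from the text; the update rule itself is an assumption.

def ada_relation_step(eta, change_metric, eps=0.05, alpha=1.0,
                      eta_min=100.0, eta_max=200.0):
    """Adapt the adjacency learning rate eta based on a measure of how much
    the inferred relations changed in the latest streaming step."""
    if change_metric > eps:
        eta += alpha  # relations still moving: increase the rate
    else:
        eta -= alpha  # relations stable: decrease the rate
    # Clip to the stated lower/upper bounds.
    return min(max(eta, eta_min), eta_max)

# Streaming setup from the table: batch and epoch are both 1, so eta is
# updated once per incoming sample.
eta = 100.0  # initial learning rate eta(0) for ORI with NRI decoders
for change in [0.10, 0.08, 0.01, 0.02]:
    eta = ada_relation_step(eta, change)
```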