Interaction Modeling with Multiplex Attention
Authors: Fan-Yun Sun, Isaac Kauvar, Ruohan Zhang, Jiachen Li, Mykel J Kochenderfer, Jiajun Wu, Nick Haber
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our approach outperforms state-of-the-art models in trajectory forecasting and relation inference across three multi-agent scenarios: social navigation, cooperative task achievement, and team sports. We conduct experiments in a range of social multi-agent environments inspired by these real-world scenarios, evaluate models on trajectory prediction, analyze the relations inferred by our model, and explore the important role these inferred relations play in making accurate predictions. |
| Researcher Affiliation | Academia | Fan-Yun Sun, Isaac Kauvar, Ruohan Zhang, Jiachen Li, Mykel Kochenderfer, Jiajun Wu, and Nick Haber are all affiliated with Stanford University. |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper provides a project website link ('https://cs.stanford.edu/~sunfanyun/imma/') but does not explicitly state that the source code for the methodology is available there, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | To answer Q1, we test our approach in three multi-agent environments that each exhibited multiple types of social interaction: our simulated Social Navigation Environment, PHASE [30], and the NBA dataset (used in [22, 27, 49, 47, 15]). |
| Dataset Splits | No | The paper reports 100k multi-agent trajectories for the Social Navigation Environment (in total across training, validation, and testing), 300k for the NBA dataset, and 836 for PHASE. However, specific percentages or counts for the training, validation, and test splits are not provided. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions PyTorch in its references [31] but does not specify its version number or any other software dependencies with version numbers used for the experiments. |
| Experiment Setup | No | The paper's 'Experiment Setup' section describes the environments and baselines, and states 'More details are in the Appendix', but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main text. |
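Because the paper reports only aggregate trajectory counts per environment without the train/validation/test breakdown, a reproduction must pick its own partition. The sketch below is one hypothetical choice, assuming a conventional 70/15/15 random split over a list of trajectory arrays; the proportions, seed, and array shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np


def split_trajectories(trajectories, train_frac=0.7, val_frac=0.15, seed=0):
    """Partition a list of multi-agent trajectories into train/val/test.

    The 70/15/15 proportions and the fixed seed are assumptions for
    illustration only; the paper does not report the splits it used.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(trajectories))
    n_train = int(train_frac * len(idx))
    n_val = int(val_frac * len(idx))
    train = [trajectories[i] for i in idx[:n_train]]
    val = [trajectories[i] for i in idx[n_train:n_train + n_val]]
    test = [trajectories[i] for i in idx[n_train + n_val:]]
    return train, val, test


if __name__ == "__main__":
    # Dummy stand-in data: 1,000 trajectories of shape (timesteps, agents, xy).
    dummy = [np.zeros((24, 5, 2)) for _ in range(1000)]
    train, val, test = split_trajectories(dummy)
    print(len(train), len(val), len(test))  # 700 150 150
```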