ELMA: Energy-Based Learning for Multi-Agent Activity Forecasting
Authors: Yuke Li, Pin Wang, Lixiong Chen, Zheng Wang, Ching-Yao Chan
AAAI 2022, pp. 1482-1490 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on two large-scale datasets prove that ELMA outperforms recent leading studies by an obvious margin. |
| Researcher Affiliation | Academia | Yu-Ke Li (1), Pin Wang (1), Li-Xiong Chen (2), Zheng Wang (3*), Ching-Yao Chan (1*); (1) California PATH, UC Berkeley; (2) Department of Engineering Science, University of Oxford; (3) School of Computer Science, Wuhan University |
| Pseudocode | No | The paper describes algorithmic steps such as Langevin dynamics and MCMC sampling in prose, but does not present them in a formal pseudocode or algorithm block. (A hedged sketch of such a sampler appears after this table.) |
| Open Source Code | No | The paper does not contain any statement about making the source code available or provide a link to a code repository. |
| Open Datasets | Yes | Two large-scale datasets, the Activities in Extended Videos (ActEV/VIRAT) benchmark (Awad et al. 2018) and TITAN (Malla, Dariush, and Choi 2020), are used to assess the performance of ELMA. |
| Dataset Splits | Yes | TITAN contains 400 videos for training, 200 videos for validation, and 100 videos for test. |
| Hardware Specification | Yes | Our implementation uses PyTorch. The experiments are executed on four Nvidia GeForce TITAN Xp GPUs, with 48 GB of memory in total. |
| Software Dependencies | No | The paper states 'Our implementation uses PyTorch' and mentions the 'RMSProp optimizer', but does not specify version numbers for PyTorch or any other software dependency. |
| Experiment Setup | Yes | In our experiments, the RMSProp optimizer (Goodfellow, Bengio, and Courville 2016) is employed with the learning rate initialized at 8 × 10⁻⁵. Our implementation uses PyTorch. The experiments are executed on four Nvidia GeForce TITAN Xp GPUs, with 48 GB of memory in total. We observe his/her past 8 steps and forecast the activities of the subsequent 12 steps. Two stacked AG-CLSTM layers with 512 channels are leveraged to calculate Eq. 10. In practice, we consider building our graph with 50 nodes for the experiments. (A configuration sketch based on these reported values follows this table.) |
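
Since the paper releases neither code nor pseudocode, the following is a minimal sketch of the kind of Langevin-dynamics MCMC sampler it describes for its energy-based model. The function name, step count, step size, and noise scale are all illustrative assumptions, not ELMA's actual settings.

```python
import torch

def langevin_sample(energy_fn, y_init, n_steps=20, step_size=0.1, noise_scale=0.01):
    """Draw an approximate sample from exp(-E(y)) via Langevin dynamics.

    Hypothetical sketch: `energy_fn`, the step count, and the step/noise
    scales are assumptions for illustration, not values from the ELMA paper.
    """
    y = y_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_fn(y).sum()
        (grad,) = torch.autograd.grad(energy, y)
        with torch.no_grad():
            # Gradient step on the energy plus Gaussian noise (the MCMC move).
            y = y - 0.5 * step_size * grad + noise_scale * torch.randn_like(y)
        y.requires_grad_(True)
    return y.detach()
```

Given an energy network, a call such as `langevin_sample(energy_net, torch.randn_like(future))` would refine random trajectories toward low-energy forecasts.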
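
The reported training settings can likewise be collected into a short configuration sketch. Only the numeric values (8 observed / 12 forecast steps, 50 graph nodes, 512 channels, RMSProp at 8 × 10⁻⁵) come from the paper; `PlaceholderForecaster` is a stand-in, since the AG-CLSTM architecture is not publicly released.

```python
import torch
import torch.nn as nn

# Values reported in the paper; everything else in this sketch is a stand-in.
OBS_STEPS, PRED_STEPS = 8, 12  # observe 8 past steps, forecast the next 12
NUM_NODES = 50                 # the graph is built with 50 nodes
CHANNELS = 512                 # the two stacked AG-CLSTM layers use 512 channels
LR = 8e-5                      # initial learning rate for RMSProp

class PlaceholderForecaster(nn.Module):
    """Stand-in backbone: ELMA's AG-CLSTM layers are not available, so a
    plain two-layer LSTM is used here purely to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=NUM_NODES, hidden_size=CHANNELS,
                           num_layers=2, batch_first=True)
        self.head = nn.Linear(CHANNELS, NUM_NODES)

    def forward(self, past):  # past: (batch, OBS_STEPS, NUM_NODES)
        _, (hidden, _) = self.rnn(past)
        # Repeat the last hidden state over the forecast horizon (sketch only).
        return self.head(hidden[-1]).unsqueeze(1).repeat(1, PRED_STEPS, 1)

model = PlaceholderForecaster()
optimizer = torch.optim.RMSprop(model.parameters(), lr=LR)
```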