THOMAS: Trajectory Heatmap Output with learned Multi-Agent Sampling

Authors: Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We report our results on the Interaction multi-agent prediction challenge and rank 1st on the online test leaderboard." (Abstract); "4 EXPERIMENTS" (section title); "4.3 COMPARISON WITH STATE-OF-THE-ART" (section title); "4.4 ABLATION STUDIES" (section title)
Researcher Affiliation | Collaboration | 1. IoV team, Paris Research Center, Huawei Technologies France; 2. Center for Robotics, MINES ParisTech
Pseudocode | No | The paper includes a detailed architecture diagram (Figure 8) but no explicitly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | The reproducibility statement mentions the public availability of the dataset but does not state that the code for the described methodology is open-source or provide a link to it.
Open Datasets | Yes | "We use the publicly available Interaction 1.2 dataset (Zhan et al., 2019) available at http://challenge.interaction-dataset.com/dataset/download."
Dataset Splits | Yes | "We use the training/validation split provided in Interaction 1.2."
Hardware Specification | No | The paper discusses training and inference times (e.g., "Training 7.5 hours", "Inference 20 ms") but does not specify the hardware used (e.g., GPU model, CPU type, or memory).
Software Dependencies | No | The paper mentions software components like the Adam optimizer, ReLU activation, and Layer Normalization, but does not specify version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, Python).
Experiment Setup | Yes | "We train all models with Adam optimizer and batchsize 32. We initialize the learning rate at 1e-3 and divide it by 2 at epochs 3, 6, 9 and 13, before stopping the training at epoch 16. We use ReLU activation after every linear layer unless specified otherwise, and Layer Normalization after every attention and graph convolution layer." A minimal sketch of this schedule follows the table.
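
Since the Experiment Setup row fully specifies the optimization schedule, here is a minimal training-loop sketch of that schedule. It assumes PyTorch (the paper does not name a framework), and the model, dataset, and MSE loss below are hypothetical placeholders standing in for the actual THOMAS architecture and its heatmap objectives.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network and data; the real model is the THOMAS architecture, not reproduced here.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.LayerNorm(128), nn.Linear(128, 2))
dataset = TensorDataset(torch.randn(1024, 64), torch.randn(1024, 2))
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # batch size 32 as reported

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, initial learning rate 1e-3
# Halve the learning rate at epochs 3, 6, 9 and 13.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3, 6, 9, 13], gamma=0.5)
criterion = nn.MSELoss()  # stand-in loss; the paper's heatmap losses are not reproduced here

for epoch in range(16):  # training stops at epoch 16
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the learning-rate schedule once per epoch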