Neural Relational Inference with Node-Specific Information

Authors: Ershad Banijamali

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiment results over real-world datasets validate the merit of our proposed algorithm."
Researcher Affiliation | Industry | "Ershad Banijamali, Amazon Alexa AI, Toronto, Canada, ebanijam@amazon.com"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither provides links to open-source code for the described methodology nor states that code has been released.
Open Datasets | Yes | "We perform our experiments on two different tasks, i.e. goal-conditional prediction and action-conditional prediction. For both tasks we consider a multi-agent system in which at least one agent has access to its individualized features. Both of these tasks are of great interest in the context of trajectory prediction, with important downstream applications such as planning. For the goal-conditional task the individualized feature is the final goal (position) of the agent. Therefore, this information is fixed for the whole prediction horizon or at least for multiple time steps, $c_i^{t:t+l} = g_i^t$ for $l > 1$. For the action-conditional task the individualized feature is the next action of the agent, which changes at every time step, $c_i^t = u_i^t$. We refer to our model as NRI-NSI. In all of our experiments we use the ADAM optimizer (Kingma & Ba, 2015) with learning rate 0.0001." (A conditioning-feature sketch follows the table.)
Dataset Splits | Yes | "For the NGSIM I-80 dataset the training and test data are split according to the preprocessing of Henaff et al. (2019). The training data is divided into 80% training and 20% validation. For the Basketball dataset the data is divided into 65% training, 10% validation, and 25% test sets. For the nuScenes dataset the training set is divided into 80% training and 20% validation." (A minimal split sketch follows the table.)
Hardware Specification | Yes | "Type of GPU: single TITAN X GPU."
Software Dependencies | No | The paper mentions software components and techniques such as MLPs, LSTMs, GNNs, CNNs, the ADAM optimizer, ELU activation, softmax, the Gumbel distribution, and batch normalization, but it does not specify version numbers for these or any other software packages.
Experiment Setup | Yes | "In all of our experiments we use the ADAM optimizer (Kingma & Ba, 2015) with learning rate 0.0001. ... teacher forcing and feed the model with ground truth (instead of previous predictions) for the first 10 time steps. ... Parameter of the Gumbel distribution: $\tau = 0.5$; percentage of samples from the $p_\theta(z \mid x, c)$ model: $\alpha = 10\%$; batch size: 128; number of epochs: 50 for the NGSIM I-80 dataset, 500 for the Basketball dataset, and 50 for the nuScenes dataset." (A training-configuration sketch follows the table.)
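
The Open Datasets row above distinguishes two forms of node-specific information: a goal that stays fixed over the prediction horizon ($c_i^{t:t+l} = g_i^t$) and an action that changes at every step ($c_i^t = u_i^t$). Below is a minimal sketch of how such conditioning features might be assembled; the function name, argument names, and array shapes are assumptions, since the paper publishes no code:

```python
import numpy as np

def conditioning_features(goals, actions, t, horizon, task="goal"):
    """Hypothetical helper: build node-specific conditioning features c_i^t.

    goals:   (num_agents, goal_dim)      final goal positions g_i
    actions: (T, num_agents, action_dim) per-step actions u_i^t
    """
    if task == "goal":
        # Goal-conditional: c_i^{t:t+l} = g_i^t, i.e. the same goal is
        # repeated for every step of the prediction horizon.
        return np.repeat(goals[None, :, :], horizon, axis=0)
    else:
        # Action-conditional: c_i^t = u_i^t, a different feature each step.
        return actions[t:t + horizon]
```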
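
The Dataset Splits row quotes three ratio-based splits (80/20, 65/10/25, 80/20). For NGSIM I-80 the train/test division itself follows the preprocessing of Henaff et al. (2019), so the sketch below only illustrates the ratio cuts; `split_indices` and the fixed seed are assumptions, not the paper's procedure:

```python
import numpy as np

def split_indices(n, ratios, seed=0):
    """Shuffle n sample indices and cut them according to `ratios`.

    `ratios` must sum to 1.0, e.g. (0.65, 0.10, 0.25) for the
    Basketball-style train/val/test split quoted above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    cuts = np.cumsum([int(r * n) for r in ratios[:-1]])
    return np.split(idx, cuts)

# 80/20 train/validation carve-out of an existing training set (NGSIM I-80, nuScenes)
train_idx, val_idx = split_indices(10_000, (0.8, 0.2))
# 65/10/25 three-way split (Basketball)
tr, va, te = split_indices(10_000, (0.65, 0.10, 0.25))
```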
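
The Experiment Setup row lists a Gumbel temperature of 0.5, an ADAM learning rate of 0.0001, and teacher forcing over the first 10 time steps. Here is a minimal PyTorch sketch of how those quoted settings fit together in an NRI-style model; the placeholder `decoder`, the `rollout` helper, and all tensor shapes are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hyperparameters quoted in the Experiment Setup row.
TAU = 0.5                  # Gumbel-softmax temperature
LR = 1e-4                  # ADAM learning rate
BATCH_SIZE = 128
TEACHER_FORCING_STEPS = 10
ALPHA = 0.10               # fraction of samples from p_theta(z|x, c); usage not detailed in the excerpt
EPOCHS = {"ngsim_i80": 50, "basketball": 500, "nuscenes": 50}

decoder = nn.Linear(4, 4)  # placeholder one-step decoder; the paper's GNN decoder is not released

def rollout(states, horizon):
    """Predict `horizon` steps; feed ground truth for the first 10 steps
    (teacher forcing), then feed back the model's own predictions.
    `states` must hold at least TEACHER_FORCING_STEPS + 1 ground-truth steps."""
    preds, inp = [], states[0]
    for t in range(horizon):
        pred = decoder(inp)
        preds.append(pred)
        inp = states[t + 1] if t + 1 < TEACHER_FORCING_STEPS else pred
    return torch.stack(preds)

# Differentiable discrete edge sampling at the quoted temperature tau = 0.5,
# as in NRI-style models: soft one-hot samples over edge types.
edge_logits = torch.randn(BATCH_SIZE, 20, 2)   # (batch, num_edges, edge_types), placeholder
edges = F.gumbel_softmax(edge_logits, tau=TAU)

optimizer = torch.optim.Adam(decoder.parameters(), lr=LR)
```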