A Diffusion-Model of Joint Interactive Navigation
Authors: Matthew Niedoba, Jonathan Lavington, Yunpeng Liu, Vasileios Lioutas, Justice Sefas, Xiaoxuan Liang, Dylan Green, Setareh Dabiri, Berend Zwartsenberg, Adam Scibior, Frank Wood
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the quality of sampled trajectories with both joint and ego-only motion forecasting on the Argoverse [4] and INTERACTION [53] datasets. We report excellent ego-only motion forecasting and outperform Scene Transformer on joint motion forecasting metrics. In addition, we demonstrate how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions including goal-based sampling, behavior-class sampling, and scenario editing. |
| Researcher Affiliation | Collaboration | 1 University of British Columbia, 2 Inverted AI |
| Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code will be made available. |
| Open Datasets | Yes | We evaluate the quality of sampled trajectories with both joint and ego-only motion forecasting on the Argoverse [4] and INTERACTION [53] datasets. |
| Dataset Splits | No | The paper mentions using the "Argoverse validation set" in Table 2, but it does not specify the training, validation, and test dataset splits (e.g., percentages, sample counts, or the methodology for creating these splits). |
| Hardware Specification | Yes | We train DJINN on two A100 GPUs for 150 epochs. [...] Runtimes are measured across 1000 samples on a GeForce RTX 2070 Mobile GPU. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" and the "Heun 2nd order sampler from [21]", but does not provide specific version numbers for these or any other key software components or libraries. |
| Experiment Setup | Yes | Training hyperparameters for both models are found in Appendix A. We train DJINN on two A100 GPUs for 150 epochs. We utilize the Adam optimizer with learning rate of 3E-4 and default values for β1 and β2. We use a linear learning rate ramp up, scaling from 0 to 3E-4 over 0.1 epochs. We set the batch size to 32. We clip gradients to a maximum norm of 5. |
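
The Experiment Setup row above fully specifies the optimization recipe quoted from the paper (Adam at 3E-4 with default betas, a linear warmup from 0 over the first 0.1 epochs, batch size 32, gradient clipping at norm 5, 150 epochs). As a quick reference only, the sketch below wires those quoted values into a minimal PyTorch loop; the `model`, `train_loader`, and loss are hypothetical stand-ins, since the paper releases no code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

# Hypothetical stand-ins; DJINN's network and data pipeline are not released.
model = torch.nn.Linear(10, 10)              # placeholder for the DJINN network
train_loader = [(torch.randn(32, 10),)]      # placeholder batch-size-32 loader

EPOCHS = 150          # "150 epochs"
BASE_LR = 3e-4        # "learning rate of 3E-4"
WARMUP_EPOCHS = 0.1   # "scaling from 0 to 3E-4 over 0.1 epochs"
MAX_GRAD_NORM = 5.0   # "clip gradients to a maximum norm of 5"

# Adam with default beta1 and beta2, as quoted.
optimizer = Adam(model.parameters(), lr=BASE_LR)

# Linear ramp from 0 to BASE_LR over the first 0.1 epochs, then constant.
steps_per_epoch = len(train_loader)
warmup_steps = max(1, int(WARMUP_EPOCHS * steps_per_epoch))
scheduler = LambdaLR(optimizer, lambda step: min(1.0, step / warmup_steps))

for epoch in range(EPOCHS):
    for (batch,) in train_loader:
        loss = model(batch).pow(2).mean()    # placeholder for the diffusion loss
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), MAX_GRAD_NORM)
        optimizer.step()
        scheduler.step()
```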
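
The Software Dependencies row notes that sampling uses the "Heun 2nd order sampler from [21]" without a pinned version. For orientation only, here is a minimal deterministic sketch of that sampler in the style of Karras et al. [21], assuming a hypothetical `denoiser(x, sigma)` callable that returns the denoised prediction and a decreasing noise schedule ending at 0; DJINN's actual denoiser interface and noise schedule are not specified in the paper.

```python
import torch

@torch.no_grad()
def heun_sample(denoiser, shape, sigmas):
    """Deterministic 2nd-order Heun sampler in the style of [21].

    denoiser(x, sigma) -> denoised estimate (hypothetical interface).
    sigmas: decreasing 1-D tensor of noise levels with sigmas[-1] == 0.
    """
    x = torch.randn(shape) * sigmas[0]           # start from noise at sigma_max
    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        # Slope of the probability-flow ODE at the current noise level.
        d = (x - denoiser(x, sigma)) / sigma
        x_euler = x + (sigma_next - sigma) * d   # first-order (Euler) step
        if sigma_next > 0:
            # Heun correction: average the slopes at sigma and sigma_next.
            d_next = (x_euler - denoiser(x_euler, sigma_next)) / sigma_next
            x = x + (sigma_next - sigma) * 0.5 * (d + d_next)
        else:
            x = x_euler                          # final step to sigma = 0
    return x
```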