PRANK: motion Prediction based on RANKing

Authors: Yuriy Biktairov, Maxim Stebelev, Irina Rudenko, Oleh Shliazhko, Boris Yangel

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate PRANK on the in-house and Argoverse datasets, where it shows competitive results." and "In this section we describe the experimental setup, such as datasets, metrics, neural network architectures and training parameters. We then present an experimental evaluation of the proposed method, including an ablation study."
Researcher Affiliation | Collaboration | Yandex Self-Driving Group ({ybiktairov, mstebelev, irina-rud, olmer, hr0nix}@yandex-team.ru); Skolkovo Institute of Science and Technology; Moscow Institute of Physics and Technology (yuriy.biktairov@phystech.edu)
Pseudocode | No | The paper describes the PRANK approach and its training/inference processes in detail, including mathematical formulations, but it does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "We present results on two datasets: the publicly available Argoverse dataset [25] and a much larger in-house dataset."
Dataset Splits | Yes | "It contains about 1591K scenes for training, 11K for validation and 120K for test." and "It consists of 333K scenes split into 211K scenes for training, 41K for validation and 80K for test."
Hardware Specification | Yes | "We use the Adam optimizer [27] with the batch size of 128 split between 4 GeForce GTX 1080 GPUs." and "The proposed method takes about 200ms to produce predictions for 5 agents when running on GeForce RTX 2080 Ti and a single core of a modern CPU."
Software Dependencies | No | The paper mentions using the Faiss [23] library but does not specify its version, nor the versions of any other software dependencies such as programming languages or frameworks.
Experiment Setup | Yes | "We use the Adam optimizer [27] with the batch size of 128 split between 4 GeForce GTX 1080 GPUs. The learning rate starts at 5e-4 and is reduced by half every time there is no improvement in validation loss for 5 pseudo-epochs of 2000 batches each (1000 for Argoverse)."
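The learning-rate policy quoted above is a standard reduce-on-plateau schedule. A minimal sketch of that policy, assuming the paper's reported values (initial rate 5e-4, halving factor, patience of 5 pseudo-epochs); the class and method names here are illustrative, not from the paper:

```python
class PlateauHalver:
    """Halve the learning rate after `patience` pseudo-epochs
    with no improvement in validation loss."""

    def __init__(self, lr=5e-4, patience=5, factor=0.5):
        self.lr = lr
        self.patience = patience
        self.factor = factor
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # consecutive pseudo-epochs without improvement

    def step(self, val_loss):
        """Call once per pseudo-epoch (2000 batches; 1000 for Argoverse).

        Returns the learning rate to use for the next pseudo-epoch."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

In a PyTorch training loop the same policy is available out of the box as `torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=5)`.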