Predicting Physical World Destinations for Commands Given to Self-Driving Cars
Authors: Dusan Grujicic, Thierry Deruyttere, Marie-Francine Moens, Matthew B. Blaschko (pp. 715-725)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose an extension in which we annotate the 3D destination that the car needs to reach after executing the given command and evaluate multiple different baselines on predicting this destination location. Additionally, we introduce a model that outperforms the prior works adapted for this particular setting. |
| Researcher Affiliation | Academia | 1 Department of Computer Science, KU Leuven, Belgium 2 Department of Electrical Engineering, KU Leuven, Belgium {dusan.grujicic, thierry.deruyttere, sien.moens, matthew.blaschko}@kuleuven.be |
| Pseudocode | No | The paper describes the proposed method (PDPC) in text within Section 4.4 but does not include any formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper provides a link for the dataset: 'The Talk2Car-Destination dataset is available at https://github.com/ThierryDeruyttere/Talk2Car-Destination.' However, it does not provide a link or an explicit statement about the availability of the source code for the proposed method (PDPC). |
| Open Datasets | Yes | The Talk2Car-Destination dataset is available at https://github.com/ThierryDeruyttere/Talk2Car-Destination. |
| Dataset Splits | Yes | Talk2Car adds unrestricted natural language commands that refer to a specific object in a street setting, and consists of 8349, 1163, and 2447 samples in the training, validation, and test sets. ... This process resulted in 8301, 1159, and 2439 samples in the train, validation, and test sets. |
| Hardware Specification | Yes | We also wish to thank NVIDIA for providing us with two RTX Titan XPs. |
| Software Dependencies | No | The paper mentions several software components like FCOS3D, Sentence-BERT, MMDetection3D, ResNet18, FlowNet, etc., but does not provide specific version numbers for these or the underlying frameworks (e.g., PyTorch, TensorFlow) used for implementation. |
| Experiment Setup | No | The paper states: 'We use the default parameters provided by the authors' for FCOS3D, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed training configurations for their proposed PDPC model in the main text. |