Causal Navigation by Continuous-time Neural Networks

Authors: Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail.
Researcher Affiliation | Academia | CSAIL, MIT; IST Austria
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data are available at: https://github.com/mit-drl/deepdrone
Open Datasets | Yes | Code and data are available at: https://github.com/mit-drl/deepdrone
Dataset Splits | No | The paper mentions 'validation loss' and 'validation performance' in several tables but does not specify the dataset splits (e.g., percentages or sample counts) for training, validation, and test sets.
Hardware Specification | No | The paper does not describe the specific hardware used to run its experiments, such as GPU/CPU models or cloud instance types.
Software Dependencies | No | The paper mentions software platforms such as Microsoft AirSim and Unreal Engine but does not provide version numbers for these or other software dependencies.
Experiment Setup | No | The paper refers to 'details in the supplements' for the experimental setup and mentions the Adam optimizer and a cosine similarity loss, but does not provide specific hyperparameter values or detailed training configurations in the main text.
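The Experiment Setup row notes that the paper names the Adam optimizer and a cosine similarity loss but defers the concrete configuration to its supplement. As a minimal sketch only, assuming a PyTorch-style pipeline in which the model, learning rate, batch size, and tensor shapes are placeholders (none of them are values reported by the authors), such a training step might look like the following:

```python
# Illustrative sketch only: the paper cites the Adam optimizer and a cosine
# similarity loss; the model, learning rate, and data shapes below are
# placeholder assumptions, not values reported by the authors.
import torch
import torch.nn as nn

model = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)  # stand-in for the policy network
head = nn.Linear(32, 3)                                           # stand-in control-output head
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(head.parameters()), lr=1e-3   # lr is a placeholder
)

def cosine_similarity_loss(pred, target, eps=1e-8):
    """1 - cos(pred, target), averaged over the batch."""
    cos = nn.functional.cosine_similarity(pred, target, dim=-1, eps=eps)
    return (1.0 - cos).mean()

def train_step(features, targets):
    """One Adam gradient step on a batch of (sequence features, control targets)."""
    optimizer.zero_grad()
    hidden, _ = model(features)        # (batch, time, hidden)
    pred = head(hidden[:, -1, :])      # predict control from the last time step
    loss = cosine_similarity_loss(pred, targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random placeholder data:
features = torch.randn(8, 16, 64)   # 8 sequences, 16 time steps, 64 features each
targets = torch.randn(8, 3)         # placeholder 3-D control targets
print(train_step(features, targets))
```

The LSTM stand-in, `cosine_similarity_loss` helper, and all shapes here are illustrative assumptions; the authors' actual continuous-time network architectures and hyperparameters are documented only in their supplementary material and repository.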