Learning to Follow Directions in Street View
Authors: Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Raia Hadsell
AAAI 2020, pp. 11773-11781 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We describe a number of approaches that establish strong baselines on this problem. |
| Researcher Affiliation | Industry | Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, András Bánki-Horváth, Keith Anderson, Raia Hadsell (DeepMind) |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper provides a link to an environment, not specific code for the methodology presented. |
| Open Datasets | Yes | We design navigation environments in the StreetNav suite by extending the dataset and environment available from StreetLearn through the addition of driving instructions from Google Maps, by randomly sampling start and goal positions. We make the described environments, data and tasks available at http://streetlearn.cc. |
| Dataset Splits | Yes | We designate geographically separated training, validation and testing environments. Specifically, we reserve Lower Manhattan in New York City for training and use parts of Midtown for validation. Agents are evaluated both in-domain (a separate area of upper NYC), as well as out-of-domain (Pittsburgh). |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models) are mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned. |
| Experiment Setup | Yes | In all experiments, four agents are trained for a maximum of 1 billion steps. We randomly sample learning rates (1e-4 ≤ λ ≤ 2.5e-4) and entropy (5e-4 ≤ σ ≤ 5e-3) for each training run. |
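
The Experiment Setup row above describes per-run hyperparameter sampling within fixed ranges. The sketch below illustrates one way to reproduce that scheme, assuming a log-uniform distribution over the quoted intervals (the paper only states that values are sampled randomly); `log_uniform`, `NUM_AGENTS`, and `MAX_STEPS` are illustrative names, not identifiers from the paper or the StreetLearn codebase.

```python
import math
import random

NUM_AGENTS = 4             # four agents are trained per experiment (from the paper)
MAX_STEPS = 1_000_000_000  # each run trains for at most 1 billion steps

def log_uniform(low: float, high: float) -> float:
    """Sample log-uniformly from [low, high] (distribution choice is an assumption)."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

# One configuration per agent, drawn from the ranges quoted in the table:
# 1e-4 <= learning rate <= 2.5e-4 and 5e-4 <= entropy cost <= 5e-3.
configs = [
    {
        "learning_rate": log_uniform(1e-4, 2.5e-4),
        "entropy_cost": log_uniform(5e-4, 5e-3),
        "max_steps": MAX_STEPS,
    }
    for _ in range(NUM_AGENTS)
]

for i, cfg in enumerate(configs):
    print(f"agent {i}: {cfg}")
```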