Scalable Ultrafast Almost-optimal Euclidean Shortest Paths

Authors: Stefan Funke, Daniel Koch, Claudius Proissl, Axel Schneewind, Armin Weiß, Felix Weitbrecht

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show experimentally that for small instances, where optimal solutions are easily available, on average our computed paths are less than 0.3% longer than the optimum. (The quality metric this claim implies is sketched in code below the table.)
Researcher Affiliation | Academia | Stefan Funke, Daniel Koch, Claudius Proissl, Axel Schneewind, Armin Weiß and Felix Weitbrecht, University of Stuttgart, Germany
Pseudocode | No | The paper describes its algorithms and procedures in detail (e.g., Contraction Hierarchies, path optimization strategies) but does not include any explicitly labeled pseudocode or algorithm blocks. (A generic Contraction Hierarchies query is sketched below the table for orientation.)
Open Source Code | Yes | Source code and data sets are available on a companion page [Funke, 2024].
Open Datasets | Yes | To be able to evaluate our algorithms on instances of (almost) arbitrary size, we use a Mercator projection of the coastline polygons of the Open Street Map project [The Open Street Map Project, 2024]. (A minimal projection sketch appears below the table.)
Dataset Splits | No | The paper evaluates its algorithms on various problem instances using 1000 random source-target pairs, but does not specify formal train/validation/test splits in terms of percentages, counts, or predefined partition files. (Drawing such query pairs is sketched below the table.)
Hardware Specification | Yes | We implemented all algorithms in C++ and evaluated them on a Ryzen 9 5900x 12-core CPU/128GB RAM running Ubuntu Linux 22.04.
Software Dependencies | Yes | We implemented all algorithms in C++ and evaluated them on a Ryzen 9 5900x 12-core CPU/128GB RAM running Ubuntu Linux 22.04. We use the CGAL library [The CGAL Project, 2023], in particular its geometry kernel, the exact geometric predicates, as well as the constrained Delaunay triangulation code. (A minimal CGAL setup is sketched below the table.)
Experiment Setup | No | The paper describes the overall algorithmic approach, including the use of Contraction Hierarchies and path optimization techniques, and mentions some general settings such as the use of a refined triangulation. However, it does not provide specific numerical hyperparameters or detailed configurations (e.g., parameter sweeps, specific solver settings) of the kind typically found in an experiment setup section for reproducibility.
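
The Research Type row claims computed paths are on average less than 0.3% longer than the optimum. As a minimal sketch of the implied quality metric (the names and structure here are our own; the paper does not show this code), an average relative excess below 0.003 corresponds to that claim:

```cpp
#include <cstddef>
#include <vector>

// Average relative excess of computed path lengths over optimal ones.
// Assumes both vectors are non-empty, equal-length, and positive.
// A return value below 0.003 matches "less than 0.3% longer on average".
double average_relative_excess(const std::vector<double>& computed,
                               const std::vector<double>& optimal) {
    double sum = 0.0;
    for (std::size_t i = 0; i < computed.size(); ++i)
        sum += (computed[i] - optimal[i]) / optimal[i];
    return sum / static_cast<double>(computed.size());
}
```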
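The Pseudocode row notes that the paper uses Contraction Hierarchies but contains no labeled algorithm blocks. For orientation only, here is a textbook-style sketch of a CH distance query: a bidirectional Dijkstra over precomputed upward and downward graphs. This is not the authors' implementation, and it omits the usual early-stopping criteria, so it runs both searches to exhaustion:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { uint32_t to; double w; };
using Graph = std::vector<std::vector<Edge>>;  // adjacency lists, precomputed

// Forward search relaxes only edges to higher-ranked nodes (the "up" graph);
// backward search runs on the reversed "down" graph. Node ranks are assumed
// to be baked into the two graphs during preprocessing.
double ch_query(const Graph& up, const Graph& down, uint32_t s, uint32_t t) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> ds(up.size(), INF), dt(up.size(), INF);
    using QE = std::pair<double, uint32_t>;
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> qs, qt;
    ds[s] = 0; qs.push({0, s});
    dt[t] = 0; qt.push({0, t});
    double best = INF;
    auto settle = [&](auto& q, std::vector<double>& d,
                      const Graph& g, const std::vector<double>& other) {
        auto [du, u] = q.top(); q.pop();
        if (du > d[u]) return;                      // stale queue entry
        if (other[u] < INF)                         // the two searches met at u
            best = std::min(best, du + other[u]);
        for (const Edge& e : g[u])
            if (du + e.w < d[e.to]) { d[e.to] = du + e.w; q.push({d[e.to], e.to}); }
    };
    while (!qs.empty() || !qt.empty()) {
        if (!qs.empty()) settle(qs, ds, up, dt);    // forward, upward edges only
        if (!qt.empty()) settle(qt, dt, down, ds);  // backward, downward edges only
    }
    return best;  // INF if s and t are disconnected
}
```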
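The Open Datasets row mentions a Mercator projection of OSM coastline polygons. The exact projection variant and constants are not stated in the excerpt; the sketch below assumes the common spherical Web Mercator form:

```cpp
#include <cmath>

// Project WGS84 longitude/latitude (degrees) to planar Mercator
// coordinates, as one might do for OSM coastline polygons.
// The earth radius R and the spherical variant are assumptions;
// the paper does not specify them.
struct PlanarPoint { double x, y; };

PlanarPoint mercator(double lon_deg, double lat_deg) {
    constexpr double PI  = 3.14159265358979323846;
    constexpr double R   = 6378137.0;   // spherical earth radius in meters (assumed)
    constexpr double DEG = PI / 180.0;
    double x = R * lon_deg * DEG;
    double y = R * std::log(std::tan(PI / 4.0 + lat_deg * DEG / 2.0));
    return {x, y};
}
```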
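The Dataset Splits row states that evaluation uses 1000 random source-target pairs. A hypothetical way to draw such pairs from n graph nodes follows; the paper does not state the RNG, the seed, or whether pairs with s = t are excluded, so all of those are assumptions:

```cpp
#include <cstdint>
#include <random>
#include <utility>
#include <vector>

// Draw k distinct-endpoint source-target pairs uniformly at random.
std::vector<std::pair<uint32_t, uint32_t>> random_pairs(uint32_t n, std::size_t k = 1000) {
    std::mt19937 rng(42);  // fixed seed (assumption) for repeatable experiments
    std::uniform_int_distribution<uint32_t> node(0, n - 1);
    std::vector<std::pair<uint32_t, uint32_t>> pairs;
    pairs.reserve(k);
    while (pairs.size() < k) {
        uint32_t s = node(rng), t = node(rng);
        if (s != t) pairs.emplace_back(s, t);  // skip degenerate s == t queries
    }
    return pairs;
}
```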
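The Software Dependencies row names CGAL's geometry kernel, its exact geometric predicates, and its constrained Delaunay triangulation code. Below is a self-contained sketch of setting up a constrained Delaunay triangulation with an exact-predicates kernel; the specific kernel choice and the toy constraint polygon are assumptions, not taken from the paper:

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <iostream>

using K     = CGAL::Exact_predicates_inexact_constructions_kernel;
using CDT   = CGAL::Constrained_Delaunay_triangulation_2<K>;
using Point = K::Point_2;

int main() {
    CDT cdt;
    // Coastline-style segments go in as constraints, so no triangle
    // edge crosses them (a toy triangle here, not real OSM data).
    cdt.insert_constraint(Point(0, 0), Point(4, 0));
    cdt.insert_constraint(Point(4, 0), Point(2, 3));
    cdt.insert_constraint(Point(2, 3), Point(0, 0));
    cdt.insert(Point(2, 1));  // a free interior point, e.g. from refinement
    std::cout << cdt.number_of_vertices() << " vertices, "
              << cdt.number_of_faces() << " finite faces\n";
    return 0;
}
```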