Extrapolating Paths with Graph Neural Networks

Authors: Jean-Baptiste Cordonnier, Andreas Loukas

Venue: IJCAI 2019

Reproducibility assessment: each variable below lists the assessed result, followed by the supporting LLM response.
Research Type: Experimental — "Our experiments with GPS traces on a road network and user navigation on Wikipedia confirm that GRETEL can adapt to graphs with very different properties, while comparing favorably to previous solutions."
Researcher Affiliation: Academia — "Jean-Baptiste Cordonnier and Andreas Loukas, École Polytechnique Fédérale de Lausanne, {jean-baptiste.cordonnier, andreas.loukas}@epfl.ch"
Pseudocode: No — The paper describes algorithmic steps and mathematical formulations for GRETEL's components and operations, but does not present them in clearly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code: Yes — "Information about the datasets and hyper-parameters are displayed in Table 1, code and datasets are publicly available at https://github.com/jbcdnr/gretel-path-extrapolation."
Open Datasets: Yes — "Information about the datasets and hyper-parameters are displayed in Table 1, code and datasets are publicly available at https://github.com/jbcdnr/gretel-path-extrapolation. ... We ran an experiment based on a small dataset of food deliveries (229 traces) occurring over the Open Street Map road network of Lausanne (18156 nodes, 32468 edges). ... In the Wikispeedia game [West et al., 2009]..."
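For illustration, here is a hypothetical sketch of how one might fetch a comparable road network. The paper uses OpenStreetMap data for Lausanne, but the osmnx library and the exact place query below are assumptions, not tools named by the authors, and the resulting node/edge counts will differ from the paper's figures.

    # Hypothetical sketch: download a Lausanne road network from OpenStreetMap.
    # osmnx and the place query are illustrative assumptions, not from the paper.
    import osmnx as ox

    # Fetch the drivable road network of Lausanne as a directed graph.
    graph = ox.graph_from_place("Lausanne, Switzerland", network_type="drive")
    print(graph.number_of_nodes(), graph.number_of_edges())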
Dataset Splits: No — Table 1 lists "train/test 80% / 20%" for both the GPS and Wikispeedia datasets, but no explicit validation split percentage or methodology is mentioned.
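Since the paper reports only the 80%/20% ratio and no split methodology, the following is a minimal sketch of one plausible reading. The function name, the trace container, and the fixed seed are illustrative assumptions, not taken from the released code.

    # Minimal sketch of an 80%/20% train/test split over path traces.
    # `traces` and the seed are assumptions for illustration only.
    import random

    def split_traces(traces, train_fraction=0.8, seed=0):
        """Shuffle the traces and split them into train/test subsets."""
        rng = random.Random(seed)
        indices = list(range(len(traces)))
        rng.shuffle(indices)
        cut = int(train_fraction * len(indices))
        train = [traces[i] for i in indices[:cut]]
        test = [traces[i] for i in indices[cut:]]
        return train, test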
Hardware Specification: No — The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies: No — The paper mentions training parameters such as the Adam learning rate, but does not specify software dependencies (e.g., Python, PyTorch, TensorFlow) with version numbers.
Experiment Setup: Yes — Information about the datasets and hyper-parameters is displayed in Table 1: Adam learning rate 0.01 / 0.1, batch size 5 / 10, number of epochs 200 / 5, where the paired values correspond to the GPS and Wikispeedia datasets, respectively.
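To make the quoted setup concrete, here is a minimal sketch assuming PyTorch (the paper does not name its framework, per the Software Dependencies row above). The `model` argument is a placeholder for the GRETEL network; the hyper-parameter values are those reported in Table 1.

    # Sketch of the reported optimizer setup, assuming PyTorch.
    # Hyper-parameter values are quoted from Table 1 of the paper;
    # the dictionary layout and function name are illustrative.
    import torch

    HYPERPARAMS = {
        "gps":         {"lr": 0.01, "batch_size": 5,  "epochs": 200},
        "wikispeedia": {"lr": 0.1,  "batch_size": 10, "epochs": 5},
    }

    def make_optimizer(model, dataset):
        """Build an Adam optimizer with the learning rate reported for `dataset`."""
        cfg = HYPERPARAMS[dataset]
        return torch.optim.Adam(model.parameters(), lr=cfg["lr"])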