Generalized Value Iteration Networks: Life Beyond Lattices

Authors: Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa Smith, Jelena Kovačević

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Through intensive experiments we demonstrate the generalization ability of GVIN within imitation learning and episodic Q-learning for various datasets, including synthetic 2D maze data, irregular graphs, and real-world maps (Minnesota highway and New York street maps); we show that GVIN significantly outperforms VIN with discretization input on irregular structures. See Section Experimental Results. |
| Researcher Affiliation | Collaboration | Clemson University, 433 Calhoun Dr., Clemson, SC 29634, USA; Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA; Uber Advanced Technologies Group, 100 32nd St, Pittsburgh, PA 15201, USA |
| Pseudocode | Yes | The pseudocode for the algorithm is presented in the Supplementary. |
| Open Source Code | No | The paper provides a footnote linking to the VIN project's GitHub (https://github.com/avivt/VIN), used to generate the 2D mazes for the VIN baseline, but it does not provide a link to, or an explicit statement about releasing, the source code for the proposed GVIN method. |
| Open Datasets | Yes | We generate 22,467 2D mazes (16 × 16) using the same scripts that VIN used. |
| Dataset Splits | No | The paper states "6/7 data for training and 1/7 data for testing" but does not explicitly mention a separate validation split or subset (see the split sketch after this table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | Additional experiment parameter settings are listed in the Supplementary. [...] We set the recurrence parameter to K = 200. (See the value-iteration sketch after this table.) |
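
The Dataset Splits row quotes a 6/7 training and 1/7 testing partition with no validation subset. Below is a minimal sketch of such a split, assuming a simple random permutation of the 22,467 mazes; the seed and variable names are hypothetical and not taken from the paper.

```python
# Hedged sketch: a 6/7 train / 1/7 test split over the maze dataset,
# as described in the quoted text. No validation subset is carved out,
# matching what the paper states.
import numpy as np

rng = np.random.default_rng(0)        # seed chosen arbitrarily for this sketch

num_mazes = 22_467                    # number of 16 x 16 mazes reported in the paper
indices = rng.permutation(num_mazes)  # shuffle maze indices

num_train = (6 * num_mazes) // 7      # 6/7 of the data for training
train_idx = indices[:num_train]
test_idx = indices[num_train:]        # remaining 1/7 for testing

print(len(train_idx), len(test_idx))  # 19257 train, 3210 test
```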
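The Experiment Setup row cites a recurrence depth of K = 200, i.e., the number of value-iteration steps unrolled by the planning module. As a point of reference only, the sketch below runs classic tabular value iteration on a toy graph for K steps; it is not the learned GVIN graph-convolution operator, and the graph, rewards, and discount factor are made-up placeholders.

```python
# Hedged sketch: plain value iteration run for a fixed recurrence depth
# K = 200 (the depth reported in the paper). Illustration only; GVIN
# replaces this hand-coded backup with a learned graph convolution.
import numpy as np

K = 200          # recurrence depth from the paper
gamma = 0.99     # discount factor (assumed, not stated here)

# Adjacency of a toy 4-node chain graph; adj[i, j] = 1 if edge i -> j exists.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

reward = np.array([0.0, 0.0, 0.0, 1.0])   # goal node carries the reward

value = np.zeros(4)
for _ in range(K):
    # For each node, back up the best discounted neighbor value plus its reward.
    neighbor_best = np.where(adj > 0, value[None, :], -np.inf).max(axis=1)
    value = reward + gamma * neighbor_best

print(value.round(3))
```

In GVIN itself the transition/weight operator is learned and the backup is expressed as a graph convolution over irregular nodes, but the fixed recurrence depth plays the same role as the loop above.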