Graph Learning for Numeric Planning

Authors: Dillon Z. Chen, Sylvie Thiébaux

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our graph kernels are vastly more efficient and generalise better than graph neural networks for numeric planning, and also yield competitive coverage performance compared to domain-independent numeric planners."
Researcher Affiliation | Academia | "Dillon Z. Chen (1,2), Sylvie Thiébaux (1,2); 1: LAAS-CNRS, University of Toulouse; 2: The Australian National University; {dillon.chen,sylvie.thiebaux}@laas.fr"
Pseudocode | Yes | "Algorithm 1: CCWL algorithm" (a generic WL colour-refinement sketch is given after the table)
Open Source Code | Yes | Code is available at https://github.com/DillonZChen/goose
Open Datasets | Yes | "We take 8 domains out of 10 domains from the International Planning Competition 2023 Learning Track (IPC-LT) [SSA23] and either convert them to equivalent numeric formulations, or introduce numeric variables to model extra features such as capacity constraints."
Dataset Splits | No | The paper mentions "90 testing problems and at most 99 small training problems" but does not explicitly describe a validation split.
Hardware Specification | Yes | "All baselines and models are run on a single Intel Xeon Platinum 8268 (2.90 GHz) core with a 5 minute timeout for search and 8GB of main memory." (a sketch of enforcing such limits follows the table)
Software Dependencies | No | The paper mentions "CPLEX version 22.11" but does not list other key software components with specific version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | "Each GNN has a hidden dimension of 64, and is trained with the Adam optimiser [KB15] with an initial learning rate of 10^-3 and batch size of 16. A scheduler reduces the learning rate by a factor of 10 if the training loss does not improve after 10 epochs. Training then terminates if the learning rate falls below 10^-5." (a training-loop sketch with these hyperparameters follows the table)
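
The paper's Algorithm 1 (CCWL) adapts Weisfeiler-Leman colour refinement to planning graphs with numeric features; the exact numeric handling is specific to the paper. As a point of reference only, here is a minimal sketch of classic WL colour refinement producing histogram features, assuming a dict-of-lists graph representation. The function wl_features and the toy graph are illustrative and not taken from the GOOSE codebase.

    from collections import Counter

    def wl_features(graph, node_colours, iterations=2):
        # graph: dict mapping each node to a list of neighbour nodes
        # node_colours: dict mapping each node to an initial (hashable) colour
        colours = dict(node_colours)
        table = {}  # signature -> compressed colour id, shared across rounds
        # Histogram keys are (iteration, colour) pairs so that colour ids
        # from different refinement rounds never collide.
        hist = Counter((0, c) for c in colours.values())
        for it in range(1, iterations + 1):
            new_colours = {}
            for v, nbrs in graph.items():
                # A node's refined colour is its old colour together with
                # the sorted multiset of its neighbours' old colours.
                sig = (colours[v], tuple(sorted(colours[u] for u in nbrs)))
                if sig not in table:
                    table[sig] = len(table)  # assign a fresh compressed colour
                new_colours[v] = table[sig]
            colours = new_colours
            hist.update((it, c) for c in colours.values())
        return hist  # feature vector: counts of every colour ever seen

    # Toy example: path graph a - b - c with two initial colours.
    print(wl_features({"a": ["b"], "b": ["a", "c"], "c": ["b"]},
                      {"a": 0, "b": 1, "c": 0}))

In WL-based kernels such as the paper's, these colour histograms serve as the feature vectors fed to a linear model, which is what makes them far cheaper to train than GNNs.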
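The stated hardware limits (single core, 5-minute search timeout, 8 GB of memory) can be reproduced on Linux with OS-level resource limits. A minimal sketch, assuming a hypothetical ./planner executable and PDDL inputs; the paper does not say how the limits were enforced.

    import resource
    import subprocess

    def set_limits():
        # 5-minute CPU timeout and 8 GB address-space cap for the child.
        resource.setrlimit(resource.RLIMIT_CPU, (300, 300))
        resource.setrlimit(resource.RLIMIT_AS, (8 * 2**30, 8 * 2**30))

    # ./planner, domain.pddl and problem.pddl are placeholders.
    subprocess.run(["./planner", "domain.pddl", "problem.pddl"],
                   preexec_fn=set_limits)

Pinning the run to a single core, as the paper reports, could additionally be done by prefixing the command with taskset -c 0.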
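The GNN training regime quoted in the last row maps directly onto standard PyTorch components. A minimal sketch under those hyperparameters; the model and data below are stand-ins, not the paper's GNN architecture or planning data.

    import torch
    from torch import nn

    # Stand-in model with hidden dimension 64, as in the quoted setup.
    model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Divide the learning rate by 10 when the monitored loss stalls
    # for 10 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimiser, factor=0.1, patience=10)

    x, y = torch.randn(256, 8), torch.randn(256, 1)  # dummy data
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=16, shuffle=True)
    loss_fn = nn.MSELoss()

    epoch = 0
    while optimiser.param_groups[0]["lr"] >= 1e-5:  # stop below 10^-5
        epoch_loss = 0.0
        for xb, yb in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimiser.step()
            epoch_loss += loss.item() * len(xb)
        epoch_loss /= len(loader.dataset)
        scheduler.step(epoch_loss)  # plateau signal is the training loss
        epoch += 1
    print(f"stopped after {epoch} epochs")

Note that scheduler.step is fed the epoch's training loss, matching the quoted setup, which monitors training loss rather than a validation metric.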