Learning How to Ground a Plan – Partial Grounding in Classical Planning

Authors: Daniel Gnad, Álvaro Torralba, Martín Domínguez, Carlos Areces, Facundo Bustos

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical evaluation attests that the approach is capable of solving planning instances that are too big to be fully grounded.
Researcher Affiliation | Academia | Daniel Gnad, Álvaro Torralba, Saarland University, Saarland Informatics Campus, Saarbrücken, Germany, {gnad,torralba}@cs.uni-saarland.de; and Martín Domínguez, Carlos Areces, Facundo Bustos, Universidad Nacional de Córdoba, Córdoba, Argentina, {mardom75, carlos.areces, facundojosebustos}@gmail.com
Pseudocode | Yes | Algorithm 1: Partial Grounding. (A sketch of this grounding loop appears after the table.)
Open Source Code | No | The paper contains no unambiguous statement that the authors release the source code for the described methodology, and it provides no direct link to a code repository.
Open Datasets | Yes | We picked four domains that were part of the learning track of the international planning competition (IPC) 2011 (Blocksworld, Depots, Satellite, and TPP), as well as two domains of the deterministic track of IPC'18 (Agricola and Caldera). For all domains, we used the deterministic track IPC instances and a set of 25 large instances that we generated ourselves for the experiments.
Dataset Splits | Yes | We evaluated our models in isolation on a set of validation instances that are distinct from both our training and testing set, and small enough to compute the set of operators that are part of any optimal plan. (See the split sketch after the table.)
Hardware Specification | No | The paper states runtime and memory limits (e.g., '30 minutes and 4GB' for the entire process), but it gives no hardware details such as the exact CPU/GPU models or processor types used in the experiments.
Software Dependencies | No | The paper names software such as the Fast Downward planning system (FD) and the scikit Python package, but it provides no version numbers for these components. (A version-logging snippet follows the table.)
Experiment Setup | No | The paper reports some experimental settings, such as running the first iteration of the LAMA planner and time/memory limits ('5 hours and 4GB for the incremental grounding', '10 minutes per iteration' of search), but it does not give the hyperparameters or training settings of the machine learning models (e.g., learning rate, batch size, optimizer settings). (A resource-limit sketch follows the table.)
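
The Pseudocode row refers to Algorithm 1 (Partial Grounding), which grounds operators in the order given by a learned priority estimate and stops once the goal becomes reachable. The Python sketch below illustrates that queue-based loop under stated assumptions: it is not the authors' code, and instantiate, goal_reached, the learned priority function, and the add_effects attribute are all illustrative names.

    import heapq
    from itertools import count

    def partially_ground(initial_facts, instantiate, goal_reached, priority):
        """Ground operators by descending learned priority until the goal
        becomes reachable in the partial grounding, then stop.

        instantiate(facts): yields ground operators whose preconditions
            hold under the fact set `facts` (hypothetical callable)
        goal_reached(facts): True once the goal is reachable (hypothetical)
        priority(op): learned score; higher means 'more likely in a plan'
        """
        facts = set(initial_facts)
        grounded = []                          # operators kept in the grounding
        queue, seen, tie = [], set(), count()  # tie-breaker keeps the heap stable
        for op in instantiate(facts):
            seen.add(op)
            heapq.heappush(queue, (-priority(op), next(tie), op))
        while queue and not goal_reached(facts):
            _, _, op = heapq.heappop(queue)
            grounded.append(op)
            facts.update(op.add_effects)       # relaxed update with new facts
            for new_op in instantiate(facts):
                if new_op not in seen:
                    seen.add(new_op)
                    heapq.heappush(queue, (-priority(new_op), next(tie), new_op))
        return grounded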
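
The Dataset Splits row describes disjoint training, validation, and test instances. The snippet below is a hypothetical illustration of such a three-way split; the file names, fractions, and seed are invented rather than taken from the paper.

    import random

    def three_way_split(instances, seed=0, val_frac=0.2, test_frac=0.2):
        """Shuffle once, then carve out disjoint validation and test sets."""
        pool = list(instances)
        random.Random(seed).shuffle(pool)
        n_val, n_test = int(len(pool) * val_frac), int(len(pool) * test_frac)
        val, test = pool[:n_val], pool[n_val:n_val + n_test]
        train = pool[n_val + n_test:]
        return train, val, test

    train, val, test = three_way_split(f"p{i:02d}.pddl" for i in range(1, 31))
    assert not set(train) & set(val) and not set(val) & set(test)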
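
Since the Software Dependencies row flags missing version numbers, a cheap safeguard when reproducing is to log the versions that actually ran. A minimal sketch, assuming the 'scikit Python package' refers to scikit-learn:

    import sys
    import sklearn  # assumption: 'scikit' means scikit-learn

    print("python:", sys.version.split()[0])
    print("scikit-learn:", sklearn.__version__)
    # For Fast Downward, pin the planner by recording its commit hash, e.g.:
    #   git -C /path/to/fast-downward rev-parse HEAD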
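
The Experiment Setup row quotes concrete time and memory limits. The sketch below shows one way to impose such limits when running the first iteration of LAMA through Fast Downward's driver; the --overall-time-limit/--overall-memory-limit options and the lama-first alias exist in recent Fast Downward releases, while the task files are placeholders and the 30-minute/4GB figures come from the quotes above, not from released scripts.

    import subprocess

    cmd = [
        "./fast-downward.py",
        "--overall-time-limit", "30m",   # '30 minutes ... for the entire process'
        "--overall-memory-limit", "4G",
        "--alias", "lama-first",         # first iteration of LAMA
        "domain.pddl", "problem.pddl",   # placeholder task files
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print("planner exit code:", proc.returncode)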