Stackelberg Planning: Towards Effective Leader-Follower State Space Search

Authors: Patrick Speicher, Marcel Steinmetz, Michael Backes, Jörg Hoffmann, Robert Künnemann

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run experiments on Stackelberg variants of IPC and pentesting benchmarks.
Researcher Affiliation | Academia | Patrick Speicher, Marcel Steinmetz, Michael Backes, Jörg Hoffmann, Robert Künnemann (CISPA, Saarland University, Saarland Informatics Campus, Saarbrücken, Germany); {patrick.speicher,robert.kuennemann}@cispa.saarland, backes@mpi-sws.org, {steinmetz,hoffmann}@cs.uni-saarland.de
Pseudocode | Yes | Figure 1: Leader-follower search. Search nodes N contain the state N.s and the leader path cost N.L to s. They also cache the follower's best-response cost F(N.s). (An illustrative sketch of this node structure follows the table.)
Open Source Code | No | The paper does not provide a link or explicit statement about releasing the source code for the authors' Stackelberg planning implementation. It only states 'Our implementation is based on Fast Downward (Helmert 2006).'
Open Datasets | Yes | We adapted three domains relating to transportation, namely Logistics (IPC 98), Nomystery (IPC 11), and Visitall (IPC 14); the puzzle game Sokoban (IPC 14); and a simulated pentesting domain Pentest adapted from prior work on this application.
Dataset Splits | No | The paper does not specify percentages or sample counts for training, validation, and test dataset splits. It only mentions the creation of 'multiple Stackelberg planning instances' and runtime/memory limits.
Hardware Specification | Yes | We ran all experiments on 2.20 GHz Intel Xeon machines, with runtime/memory limits of 30 mins/4 GB (as typically used in IPC setups).
Software Dependencies | No | The paper mentions software such as 'Fast Downward (Helmert 2006)', 'LM-cut (Helmert and Domshlak 2009)', and 'Aidos... (Seipp et al. 2016)', but does not provide specific version numbers for these software components or libraries beyond the publication year of their respective papers.
Experiment Setup | No | The paper describes the overall algorithmic approach and pruning techniques, but does not provide specific details on hyperparameters, optimizer settings, or other concrete training configurations (e.g., learning rates, batch sizes).
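
To make the quoted Figure 1 caption concrete, below is a minimal Python sketch of a leader-follower search node as described there: it stores the leader state N.s and the leader path cost N.L, and lazily caches the follower's best-response cost F(N.s). This is only an illustration of the caption, not the authors' implementation (which is built on Fast Downward); the names LeaderNode and follower_planner are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class LeaderNode:
    """Sketch of a leader-search node per the Figure 1 caption (hypothetical)."""
    state: Any                               # N.s: state reached by the leader's action sequence
    leader_cost: float                       # N.L: accumulated leader path cost to N.s
    follower_cost: Optional[float] = None    # cached F(N.s), computed on demand

    def best_response_cost(self, follower_planner: Callable[[Any], float]) -> float:
        # Compute and cache the follower's optimal plan cost from this state.
        # `follower_planner` stands in for a call to an optimal planner
        # solving the follower's task from N.s.
        if self.follower_cost is None:
            self.follower_cost = follower_planner(self.state)
        return self.follower_cost
```

Caching the follower cost per node matters because the same leader state can be reached along several leader paths, and each best-response computation is itself a full planning call.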