Online Planner Selection with Graph Neural Networks and Adaptive Scheduling

Authors: Tengfei Ma, Patrick Ferber, Siyu Huo, Jie Chen, Michael Katz (pp. 5077-5084)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the effectiveness of the proposed method against strong baselines, both deep learning and non-deep learning based.
Researcher Affiliation | Collaboration | Tengfei Ma (IBM Research), Patrick Ferber (University of Basel; Saarland University), Siyu Huo (IBM Research), Jie Chen (IBM Research; MIT-IBM Watson AI Lab), Michael Katz (IBM Research). Contact: {Tengfei.Ma1, siyu.huo, Michael.Katz1}@ibm.com, patrick.ferber@unibas.ch, chenjie@us.ibm.com
Pseudocode | No | The paper describes the algorithms using mathematical formulations and text, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/matenure/GNN_planner.
Open Datasets | No | We prepare a data set composed of historical and the most recent IPC tasks. Specifically, the historical IPC tasks form the training and validation sets, whereas those of the year 2018 form the test set. The paper refers to "IPC tasks" without providing a specific link, DOI, or formal citation (with authors/year) for public access to this combined dataset.
Dataset Splits | Yes | We prepare two sets of training/validation splits. The first set reuses the split in Delfi, which conforms to the competition setting where a single model is used for evaluation. On the other hand, to reduce the bias incurred by a single model, for another set we randomly generate 20 splits (with an approximately 9:1 ratio). In ten of them, tasks from the same domain are not separated, whereas in the other ten, they may be. We call the former scenario domain-preserving split and the latter random split. (A minimal sketch of these two split strategies follows the table.)
Hardware Specification | No | We used eight CPU cores and one GPU for training. The consumed memory was approximately 5-10GB. This information is not specific enough (e.g., no GPU/CPU model numbers).
Software Dependencies | No | For the training of the neural networks, we use the Adam optimizer (Kingma and Ba 2015) with learning rate 0.001. No specific software libraries with version numbers are mentioned.
Experiment Setup | Yes | For the training of the neural networks, we use the Adam optimizer (Kingma and Ba 2015) with learning rate 0.001. We slightly tune other hyperparameters: the number of layers in GCN and steps in GG-NN is selected from {2, 4, 6}, and the dimension of the node representations h_v^(t) is chosen from {100, 150, 200}. (A sketch of this configuration follows the table.)
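
The two split scenarios quoted in the Dataset Splits row can be illustrated with a short sketch. This is a minimal illustration only, assuming tasks are dicts with a "domain" key; the function names and the exact assignment rule are not taken from the authors' released code.

```python
# Illustrative sketch of the two split strategies (random vs. domain-preserving).
# The task representation and function names are assumptions for this example.
import random
from collections import defaultdict

def random_split(tasks, ratio=0.9, seed=0):
    """Place ~90% of tasks in training and the rest in validation;
    tasks from the same domain may end up on different sides."""
    rng = random.Random(seed)
    shuffled = list(tasks)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

def domain_preserving_split(tasks, ratio=0.9, seed=0):
    """Assign whole domains to either training or validation, so tasks
    from the same domain are never separated."""
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for task in tasks:
        by_domain[task["domain"]].append(task)
    domains = list(by_domain)
    rng.shuffle(domains)
    train, valid = [], []
    target_size = ratio * len(tasks)
    for domain in domains:
        bucket = train if len(train) < target_size else valid
        bucket.extend(by_domain[domain])
    return train, valid

# Example: generate 20 splits (10 per scenario), as described in the paper.
tasks = [{"domain": f"d{i % 20}", "problem": f"p{i}"} for i in range(100)]
splits = ([random_split(tasks, seed=s) for s in range(10)]
          + [domain_preserving_split(tasks, seed=s) for s in range(10)])
```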
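
The training configuration from the Experiment Setup row might be expressed as below. Only the Adam optimizer, the 0.001 learning rate, and the two candidate sets come from the paper; the framework (PyTorch here), model interface, loss function, and number of epochs are assumptions for illustration.

```python
# Sketch of the reported training configuration. Only Adam, the 0.001 learning
# rate, and the two hyperparameter grids are from the paper; the framework,
# loss, and training loop are assumptions.
import itertools
import torch

LAYER_CHOICES = [2, 4, 6]      # GCN layers / GG-NN propagation steps
DIM_CHOICES = [100, 150, 200]  # dimension of node representations h_v^(t)

def train_one_config(model, train_loader, epochs=50):
    # Adam with learning rate 0.001, as stated in the paper.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    loss_fn = torch.nn.CrossEntropyLoss()  # loss choice is an assumption
    for _ in range(epochs):
        for graphs, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(graphs), labels)
            loss.backward()
            optimizer.step()
    return model

# Enumerate the 3 x 3 grid over the two tuned hyperparameters.
for num_layers, hidden_dim in itertools.product(LAYER_CHOICES, DIM_CHOICES):
    print(f"candidate config: layers/steps={num_layers}, hidden_dim={hidden_dim}")
```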