Balancing Explicability and Explanations in Human-Aware Planning

Authors: Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical evaluations demonstrate the effectiveness of the approach from the robot's perspective, while the study highlights its usefulness in being able to conform to expected normative behavior.
Researcher Affiliation | Collaboration | Tathagata Chakraborti (IBM Research AI, Cambridge MA 02142 USA), Sarath Sreedharan and Subbarao Kambhampati (Arizona State University, Tempe AZ 85281 USA); tchakra2@ibm.com, {ssreedh3, rao}@asu.edu
Pseudocode | Yes | Algorithm 1 MEGA (a hedged sketch of its trade-off objective follows this table)
Open Source Code | Yes | The code is available at https://bit.ly/2XTKHz0.
Open Datasets | Yes | We will illustrate this trade-off on modified versions of two popular IPC domains. The domains are from the International Planning Competition (IPC) 2011: http://www.plg.inf.uc3m.es/ipc2011-learning/Domains.html
Dataset Splits | No | The paper uses standard IPC domains and a custom USAR domain but does not explicitly describe train/validation/test splits or their sizes.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., programming languages, libraries, or specific solver versions) used in the experiments.
Experiment Setup | No | The paper discusses the hyper-parameter α but does not provide concrete details on other experimental setup parameters such as learning rates, batch sizes, optimizers, or specific training configurations.
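The Pseudocode row names Algorithm 1 (MEGA) as the paper's algorithmic listing, and the Experiment Setup row notes that α is the only documented hyper-parameter. Below is a minimal, self-contained Python sketch of the trade-off those two rows refer to, assuming an objective of the form |E| + α × (the chosen plan's suboptimality in the robot's own model). Every function name, the toy action-cost models, and the crude explanation stand-in are illustrative assumptions, not code from the authors' repository at https://bit.ly/2XTKHz0.

```python
# Minimal sketch (not the authors' code) of the explicability/explanation
# trade-off attributed to Algorithm 1 (MEGA): choose a plan minimizing
# |explanation| + alpha * (the plan's suboptimality in the robot's model).
# Models are toy action->cost dictionaries; plans are tuples of actions.

def plan_cost(plan, model):
    """Cost of a plan under a model mapping actions to costs."""
    return sum(model[a] for a in plan)

def explanation(plan, human_model, robot_model, plans):
    """Crude stand-in for a minimal explanation: if `plan` is not already
    optimal in the human's model, reveal every action cost on which the
    two models disagree (a real minimal explanation could be smaller)."""
    if plan_cost(plan, human_model) <= min(plan_cost(p, human_model) for p in plans):
        return []  # plan is already explicable; nothing to explain
    return [(a, robot_model[a]) for a in robot_model
            if human_model.get(a) != robot_model[a]]

def mega_choice(plans, human_model, robot_model, alpha):
    """Return the plan minimizing |explanation| + alpha * suboptimality."""
    best = min(plan_cost(p, robot_model) for p in plans)
    def objective(p):
        e = explanation(p, human_model, robot_model, plans)
        return len(e) + alpha * (plan_cost(p, robot_model) - best)
    return min(plans, key=objective)

# Toy usage: the robot knows action "a" is cheap; the human believes it is
# expensive. Low alpha favors the explicable plan ("b",); high alpha favors
# the robot-optimal plan ("a",) plus an explanation.
robot = {"a": 1, "b": 2}
human = {"a": 5, "b": 2}
plans = [("a",), ("b",)]
print(mega_choice(plans, human, robot, alpha=0.5))  # ('b',), stay explicable
print(mega_choice(plans, human, robot, alpha=2.0))  # ('a',), explain instead
```

Under this reading, α → 0 drives the agent toward purely explicable behavior (no explanation needed), while large α drives it toward its own optimal plan plus an explanation, which is the balance the paper's title describes.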