The Utility of Explainable AI in Ad Hoc Human-Machine Teaming
Authors: Rohan Paleja, Muyleng Ghuy, Nadun Ranawaka Arachchige, Reed Jensen, Matthew Gombolay
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we present two novel human-subject experiments quantifying the benefits of deploying xAI techniques within a human-machine teaming scenario. |
| Researcher Affiliation | Collaboration | Rohan Paleja1, Muyleng Ghuy1, Nadun R. Arachchige1, Reed Jensen2, Matthew Gombolay1 1Georgia Institute of Technology, 2MIT Lincoln Laboratory 1Atlanta, GA 30332, 2Lexington, MA 02420 |
| Pseudocode | No | The paper describes the cobot's policy as 'decision tree-based policies' but does not provide any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We provide a codebase with our experiment domain at https://github.com/CORE-Robotics-Lab/Utility-of-Explainable-AINeurIPS2021. |
| Open Datasets | No | The paper describes human-subjects studies involving participants playing Minecraft in a custom environment. It does not provide access information (link, DOI, citation) for a publicly available dataset of the collected human-subject data or the generated in-game environment data. |
| Dataset Splits | No | The paper describes human-subjects experiments with different conditions and phases, but it does not refer to dataset splits like 'training', 'validation', or 'test' sets in the context of machine learning model development. |
| Hardware Specification | No | The paper states '[N/A]' for hardware specifications when asked if it included 'the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?'. |
| Software Dependencies | No | The paper mentions software components like 'Microsoft Malmo Minecraft AI Project' and 'Pygame interface', but it does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We utilize a 1 × 3 within-subjects design varying across three abstractions: 1) No explanation of the robot’s hierarchical policy, 2) A status explanation of the cobot’s hierarchical policy, and 3) A decision-tree explanation of the cobot’s hierarchical policy. ... Both components of the hierarchical policy are decision tree-based policies of depth two and with four leaf nodes. |
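To make the Experiment Setup row concrete, the following is a minimal sketch of what a depth-two decision-tree policy with four leaf nodes looks like structurally. This is illustrative only: the feature names, thresholds, and actions are hypothetical and are not taken from the paper or its codebase.

```python
class Node:
    """A node in a binary decision-tree policy.

    Internal nodes test a state feature against a threshold;
    leaf nodes carry the action the policy emits.
    """
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, action=None):
        self.feature = feature      # state feature tested at internal nodes
        self.threshold = threshold  # decision boundary for that feature
        self.left = left            # subtree taken when feature < threshold
        self.right = right          # subtree taken when feature >= threshold
        self.action = action        # action emitted at leaf nodes

def act(node, state):
    """Walk from the root to a leaf and return the leaf's action."""
    while node.action is None:
        node = node.left if state[node.feature] < node.threshold else node.right
    return node.action

# Depth two with four leaves: one root test, two second-level tests,
# four possible actions. All names below are made up for illustration.
policy = Node(
    feature="distance_to_goal", threshold=5.0,
    left=Node(feature="carrying_item", threshold=0.5,
              left=Node(action="gather"), right=Node(action="deliver")),
    right=Node(feature="carrying_item", threshold=0.5,
               left=Node(action="navigate"), right=Node(action="return")),
)

state = {"distance_to_goal": 2.0, "carrying_item": 1.0}
print(act(policy, state))  # -> deliver
```

A tree of this size is small enough to render directly as the "decision-tree explanation" condition: the full policy fits in a single two-level diagram a participant can read at a glance, which is presumably why such shallow trees were chosen.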