Local Explanations for Reinforcement Learning
Authors: Ronny Luss, Amit Dhurandhar, Miao Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four domains (four rooms, door-key, minipacman, and pong) and a carefully conducted user study illustrate that our perspective leads to better understanding of the policy. We conduct a task-oriented user study to evaluate effectiveness of our method. |
| Researcher Affiliation | Industry | Ronny Luss, Amit Dhurandhar and Miao Liu, IBM Research, Yorktown Heights, NY; rluss@us.ibm.com, adhuran@us.ibm.com, miao.liu1@ibm.com |
| Pseudocode | Yes | Algorithm 1: Meta-states MS(S, A, πE, Γ, k, ϵφ, η) and Algorithm 2: Strategic State function SS(Sφ, Γ, ϵg). |
| Open Source Code | No | The paper does not explicitly state that source code for its methodology is available, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper describes the use of environments like Four Rooms, Door-Key, Minipacman, and Pong, which are common in RL, but it does not provide concrete access information (links, DOIs, repositories, or formal citations) for specific datasets used for training within these environments. |
| Dataset Splits | No | The paper does not provide specific details regarding training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined splits) for reproducibility of data partitioning. |
| Hardware Specification | No | The paper states 'Experiments were performed with 1 GPU and up to 16 GB RAM,' but it does not specify the exact models of the GPU or CPU used, which is required for a detailed hardware specification. |
| Software Dependencies | No | The paper describes the use of computational models like convolutional neural networks and Value Iteration, but it does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | The number of strategic states was chosen such that additional strategic states increased the objective value by at least 10%. The number of meta-states was selected as would be done in practice, through cross-validation to satisfy human understanding. SSX is run with local approximations to the state space with the maximum number of steps set to 6 as discussed in Section 3.4. SSX is again run with local approximations to the state space with the maximum number of steps set to 8. |
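The strategic-state selection rule quoted above (keep adding strategic states while each addition improves the objective by at least 10%) can be sketched as a simple stopping criterion. This is a minimal illustration, not the paper's code: the helper name and the assumption that the 10% threshold is relative to the previous objective value are our own.

```python
def select_num_strategic_states(objective_values, min_relative_gain=0.10):
    """Pick the number of strategic states using a marginal-gain stopping rule.

    objective_values[i] is the (hypothetical) objective achieved with i+1
    strategic states. We keep adding states while each additional state
    improves the objective by at least `min_relative_gain` (assumed relative
    to the previous value, per the paper's stated 10% criterion).
    """
    k = 1  # at least one strategic state
    for i in range(1, len(objective_values)):
        prev, cur = objective_values[i - 1], objective_values[i]
        if cur >= prev * (1.0 + min_relative_gain):
            k = i + 1  # the extra state cleared the 10% bar; keep it
        else:
            break  # marginal gain too small; stop adding states
    return k


# Example: the fourth state adds under 10%, so three states are kept.
print(select_num_strategic_states([1.0, 1.2, 1.4, 1.45]))  # → 3
```

The same pattern (sweep a size parameter, stop when marginal gain drops below a threshold) would also apply to choosing the number of meta-states, though the paper reports selecting that via cross-validation against human understanding.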