On the Reuse Bias in Off-Policy Reinforcement Learning
Authors: Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that our BIRIS-based methods can significantly improve the sample efficiency on a series of continuous control tasks in MuJoCo. |
| Researcher Affiliation | Collaboration | Department of Computer Science & Technology, Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University; Pazhou Laboratory (Huangpu), Guangzhou, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states, 'Our implementation is based on Tianshou [Weng et al., 2022].' This refers to the third-party library used, not to open-sourced code for the authors' own BIRIS implementation. |
| Open Datasets | Yes | Gridworld. For the first question, we experiment in MiniGrid, which includes different shapes of grids with discrete state space and action space, and is simple for optimizing and evaluating our policy. We will calculate and compare the Reuse Bias of PG+IS, PG+WIS [Mahmood et al., 2014], PG+IS+BIRIS, and PG+WIS+BIRIS. (See the IS/WIS estimator sketch after the table.) |
| Dataset Splits | No | The paper does not provide specific dataset split information for training, validation, and testing (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions that 'Our implementation is based on Tianshou [Weng et al., 2022]' but does not provide version numbers for Tianshou or any other software dependency. |
| Experiment Setup | Yes | For different grids (5×5, 5×5-random, 6×6, 6×6-random, 8×8, 16×16), we initialize our policy parameterized with a simple three-layer convolutional neural network. Moreover, we choose 30, 40, and 50 as the replay buffer size and sample trajectories to fill the replay buffer, to test its impact. (See the policy-network sketch after the table.) |
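
For context on the estimators named in the Open Datasets row, below is a minimal sketch of the ordinary importance sampling (IS) and weighted importance sampling (WIS) return estimators used in off-policy evaluation. The function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def is_and_wis_estimates(ratios, returns):
    """Off-policy return estimates from logged trajectories.

    ratios:  per-trajectory importance weights
             w_i = prod_t pi(a_t | s_t) / mu(a_t | s_t), where pi is the
             target policy and mu is the behavior policy that filled the buffer.
    returns: the corresponding trajectory returns R_i.
    """
    ratios = np.asarray(ratios, dtype=float)
    returns = np.asarray(returns, dtype=float)

    # Ordinary IS: unbiased for a fixed behavior policy, but high-variance
    # when the target and behavior policies diverge.
    is_estimate = np.mean(ratios * returns)

    # WIS: normalizes by the summed weights, trading a small bias for a
    # substantial variance reduction.
    wis_estimate = np.sum(ratios * returns) / np.sum(ratios)

    return is_estimate, wis_estimate
```

The Reuse Bias studied in the paper arises on top of these estimators: when the same replay buffer is reused to both evaluate and optimize the policy, even the ordinary IS estimate above is no longer unbiased, which is what the BIRIS variants are designed to mitigate.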
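
The Experiment Setup row mentions a simple three-layer convolutional policy network for the MiniGrid grids. Below is a minimal PyTorch sketch of such a network; the layer sizes and the 7×7×3 observation shape (MiniGrid's default partial view) are assumptions for illustration, not specifications from the paper.

```python
import torch
import torch.nn as nn

class ConvPolicy(nn.Module):
    """Three-layer convolutional policy head (illustrative sizes)."""

    def __init__(self, num_actions: int, obs_shape=(3, 7, 7)):
        super().__init__()
        c, h, w = obs_shape
        self.features = nn.Sequential(
            nn.Conv2d(c, 16, kernel_size=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, c, h, w)).shape[1]
        self.logits = nn.Linear(n_flat, num_actions)

    def forward(self, obs):
        # Returns action logits; sample actions with a Categorical distribution.
        return self.logits(self.features(obs))
```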