Reinforcement Learning Experience Reuse with Policy Residual Representation
Authors: Wen-Ji Zhou, Yang Yu, Yingfeng Chen, Kai Guan, Tangjie Lv, Changjie Fan, Zhi-Hua Zhou
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with the PRR network on a set of grid world navigation tasks, locomotion tasks, and fighting tasks in a video game. The results show that the PRR network leads to better reuse of experience and thus outperforms some state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) NetEase Fuxi AI Lab, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1 (Module training of L_ij), Algorithm 2 (Experience acquiring with PRR model), and Algorithm 3 (Experience reusing with PRR model) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It only provides a link to a personal academic homepage, which does not explicitly host the code for this paper. |
| Open Datasets | No | The paper describes custom environments such as the 'Fetch The Key' tasks, the 'Swimmer Gather' environment, and a fighting video game, but provides no concrete access information (links, DOIs, repositories, or formal citations) that would make these environments publicly available for reproduction. |
| Dataset Splits | No | The paper does not provide specific dataset split information, such as percentages or sample counts for training, validation, or test sets, to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions only 'PPO' as the reinforcement learning algorithm and 'MuJoCo' as the physics engine, without version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states 'All comparison algorithms use the same hyperparameters in the PPO algorithm' but does not give concrete hyperparameter values (e.g., learning rate, batch size, or optimizer settings) in the main text. |
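
The reproducibility notes above center on the PRR (Policy Residual Representation) network trained with PPO. As a rough illustration of the policy-residual idea described in the abstract, the sketch below sums the action logits of a shared, task-general module and a per-task residual module. The class names, layer sizes, and two-level structure are assumptions for illustration only; the authors' actual architecture and training procedure are given by Algorithms 1-3 in the paper, and no source code is released.

```python
# Hypothetical sketch of the policy-residual idea: a shared, task-general
# module plus a per-task residual module whose action logits are summed.
# Class names, sizes, and the two-level structure are illustrative assumptions,
# not the authors' released code (the paper provides none).
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class PRRStylePolicy(nn.Module):
    """Sum the logits of a shared module and a task-specific residual module."""

    def __init__(self, obs_dim, act_dim, n_tasks):
        super().__init__()
        self.shared = MLP(obs_dim, act_dim)  # reused across tasks
        self.residuals = nn.ModuleList(
            [MLP(obs_dim, act_dim) for _ in range(n_tasks)]  # one residual per task
        )

    def forward(self, obs, task_id):
        logits = self.shared(obs) + self.residuals[task_id](obs)
        return torch.distributions.Categorical(logits=logits)


# Usage: sample an action for task 0 from a random observation.
policy = PRRStylePolicy(obs_dim=8, act_dim=4, n_tasks=3)
dist = policy(torch.randn(1, 8), task_id=0)
action = dist.sample()
```

Under this sketch, reusing experience on a new task would amount to keeping the shared module and training a fresh residual head with PPO; whether the paper freezes or fine-tunes shared modules, and how many levels of modules it combines, is determined by its Algorithms 1-3 rather than by this example.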