Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness

Authors: Xiaoyu Wen, Xudong Yu, Rui Yang, Haoyuan Chen, Chenjia Bai, Zhen Wang

JAIR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental "Experimental results illustrate the superiority of RO2O in facilitating stable offline-to-online learning and achieving significant improvement with limited online interactions." (Section 5, Experiments: "We present a comprehensive evaluation of RO2O in the context of the Offline-to-Online RL setting.")
Researcher Affiliation Academia Xiaoyu Wen EMAIL Northwestern Polytechnical University, Xi'an, Shaanxi, China; Xudong Yu EMAIL Harbin Institute of Technology, Harbin, Heilongjiang, China; Rui Yang EMAIL The Hong Kong University of Science and Technology, Hong Kong, China; Haoyuan Chen EMAIL Northwestern Polytechnical University, Xi'an, Shaanxi, China; Chenjia Bai EMAIL (Corresponding author) Shanghai Artificial Intelligence Laboratory, Shanghai, China; Shenzhen Research Institute of Northwestern Polytechnical University, Shenzhen, Guangdong, China; Zhen Wang EMAIL (Corresponding author) Northwestern Polytechnical University, Xi'an, Shaanxi, China
Pseudocode Yes Algorithm 1 Robust Offline-to-Online RL algorithm
Open Source Code Yes The code is available in this repository (https://github.com/BattleWen/RO2O).
Open Datasets Yes Our experiments are conducted on challenging environments from the D4RL (Fu et al., 2020) benchmark, specifically focusing on the MuJoCo and AntMaze tasks.
Dataset Splits No The paper uses D4RL (Fu et al., 2020) benchmark datasets, but does not provide explicit training/test/validation splits in terms of percentages or counts. It describes using entire D4RL datasets for offline pre-training and then collecting additional data through online fine-tuning.
Hardware Specification Yes All methods are run on a single machine with one GPU (NVIDIA GeForce RTX 3090).
Software Dependencies No The paper mentions using implementations from CORL, but does not provide specific version numbers for key software components such as Python, PyTorch, or CUDA.
Experiment Setup Yes All the hyper-parameters used in RO2O for the benchmark experiments are listed in Tables 4 and 5. For the offline phase, we train agents for 2.5M gradient steps over all datasets with an ensemble size of N = 10; we then run online fine-tuning for an additional 250K environment interactions.