Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset
Authors: Yiqin Yang, Quanwei Wang, Chenghao Li, Hao Hu, Chengjie Wu, Yuhua Jiang, Dianyu Zhong, Ziyou Zhang, Qianchuan Zhao, Chongjie Zhang, Bo Xu
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that the data subsets identified by REDOR not only boost algorithm performance but also do so with significantly lower computational complexity. ... We evaluate REDOR on the D4RL benchmark (Fu et al., 2020). Comparison against various baselines and ablations shows that the data subsets constructed by REDOR can significantly improve algorithm performance at low computational cost. |
| Researcher Affiliation | Academia | 1The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences 2Tsinghua University 3Washington University in St. Louis |
| Pseudocode | Yes | Algorithm 1 Reduce Dataset for Offline RL (REDOR) ... Algorithm 2 OMP algorithm |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | Empirically, we evaluate REDOR on the D4RL benchmark (Fu et al., 2020). |
| Dataset Splits | No | The paper mentions using the D4RL benchmark and creating reduced datasets from it, but it does not explicitly specify exact training, validation, or test splits (e.g., percentages or sample counts) for its own experiments, nor does it refer to specific standard splits used from the D4RL benchmark. |
| Hardware Specification | Yes | All experiments are conducted on the same computational device (GeForce RTX 3090 GPU). |
| Software Dependencies | No | The paper mentions the use of 'Optimizer Adam' and backbones like TD3+BC and IQL, but does not provide specific version numbers for any software, libraries, or programming languages. |
| Experiment Setup | Yes | Table 5: Hyper-parameters sheet of REDOR — Optimizer: Adam; Critic learning rate: 3e-4; Actor learning rate: 3e-4; Mini-batch size: 256; Discount factor: 0.99; Target update rate: 5e-3; Policy noise: 0.2; Policy noise clipping: (-0.5, 0.5); TD3+BC regularized parameter: 2.5. REDOR parameters — Training rounds T: 50; m: 50; ϵ: 0.01. |
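The Pseudocode row names an "Algorithm 2 OMP algorithm", i.e. Orthogonal Matching Pursuit, the classic greedy subset-selection routine. The sketch below is a generic textbook OMP, not the paper's exact variant; the function name `omp` and its arguments are illustrative, and it is only an assumption that the `m` (selection budget) and `ϵ` (stopping tolerance) in Table 5 map onto the corresponding parameters here.

```python
import numpy as np

def omp(A, b, m, eps=0.01):
    """Greedy Orthogonal Matching Pursuit (illustrative sketch).

    Selects up to m columns of A whose weighted sum approximates the
    target vector b, stopping early once the residual norm drops
    below eps. Returns the selected column indices and their weights.
    """
    residual = b.copy()
    selected = []
    weights = np.zeros(0)
    for _ in range(m):
        # Pick the column most correlated with the current residual.
        correlations = A.T @ residual
        idx = int(np.argmax(np.abs(correlations)))
        if idx in selected:
            break  # no new column improves the fit
        selected.append(idx)
        # Re-fit the weights of all selected columns by least squares.
        sub = A[:, selected]
        weights, *_ = np.linalg.lstsq(sub, b, rcond=None)
        residual = b - sub @ weights
        if np.linalg.norm(residual) < eps:
            break  # target approximated within tolerance
    return selected, weights
```

In a dataset-reduction setting such as REDOR's, the columns of `A` would correspond to per-sample (or per-trajectory) feature/gradient vectors and `b` to the full-dataset target they should jointly approximate; the selected indices form the reduced dataset.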