Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Double Buffers CEM-TD3: More Efficient Evolution and Richer Exploration
Authors: Sheng Zhu, Chun Shen, Shuai Lü, Junhong Wu, Daolong An
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experimental evaluations on five continuous control tasks provided by OpenAI Gym. DBCEM-TD3 outperforms CEM-TD3, TD3, and other classic off-policy reinforcement learning algorithms in terms of performance and sample efficiency. |
| Researcher Affiliation | Academia | 1 Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, China; 2 College of Software, Jilin University, Changchun 130012, China; 3 College of Computer Science and Technology, Jilin University, China; EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: DBCEM-TD3 |
| Open Source Code | Yes | Our code is available at https://github.com/ShengZhujli/DBCEMTD3. |
| Open Datasets | Yes | We conduct experiments on five continuous control tasks on MuJoCo (Todorov, Erez, and Tassa 2012) hosted on OpenAI Gym (Brockman et al. 2016). |
| Dataset Splits | No | The performance of all curves is the average of 10 evaluations every 5000 steps. No specific train/validation/test dataset splits were provided. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory amounts, or detailed computer specifications) were provided for running the experiments. |
| Software Dependencies | No | No specific software dependency versions (e.g., Python, PyTorch, TensorFlow versions) were explicitly listed in the paper. |
| Experiment Setup | Yes | Our DBCEM-TD3 implementation is based on CEM-TD3, and the parameter settings are shown in Appendix. Hyperparameter N not only represents the number of generated new actors at each iteration, but also the number of critics in the critic buffer... Figure 6(a) shows the impact of different N on performance... Hyperparameter α controls the degree of decay of actor fitness in the actor buffer... Figure 6(b) shows the impact of different α on performance on HalfCheetah-v2. |