Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
RobustZero: Enhancing MuZero Reinforcement Learning Robustness to State Perturbations
Authors: Yushuai Li, Hengyu Liu, Torben Bach Pedersen, Yuqiang He, Kim Guldstrand Larsen, Lu Chen, Christian S. Jensen, Jiachen Xu, Tianyi Li
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two classical control environments, three energy system environments, three transportation environments, and four MuJoCo environments demonstrate that RobustZero can outperform state-of-the-art methods at defending against state perturbations. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Aalborg University, Aalborg, Denmark 2School of Computer, Electronics and Information, Guangxi University, Nanning, China 3College of Computer Science, Zhejiang University, Hangzhou, China. |
| Pseudocode | Yes | Algorithm 1 provides the pseudocode for each round of data collection. Algorithm 2: Training process of RobustZero. |
| Open Source Code | No | The text mentions: "The configurations for S-MuZero, S-MuZero-worst, and S-MuZero-random are available on open science platforms alongside our project." This statement refers to configurations of baseline methods and does not explicitly confirm that the source code for the proposed RobustZero methodology is publicly available via a direct link, specific platform, or in supplementary materials. |
| Open Datasets | Yes | We study RobustZero on: 1) two classical control environments, including CartPole [2] and Pendulum [3]; ... 3) three transportation environments [4], including Highway with discrete action space, Intersection with discrete action space, and Racetrack with continuous action space; and 4) four MuJoCo environments with continuous action spaces... [2] gymnasium.farama.org/environments/classic_control/cart_pole/ [3] gymnasium.farama.org/environments/classic_control/pendulum/ [4] github.com/Farama-Foundation/HighwayEnv |
| Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits. It mentions evaluating "the average episodic rewards ± the standard deviation over 50 episodes" for comparison, which describes evaluation episodes rather than data partitioning for model training and testing. |
| Hardware Specification | Yes | We conduct experiments on an 8-core Intel Xeon E5-2640 v4 @ 2.40GHz CPU, with each node equipped with a GeForce RTX 3090 GPU, 2.40GHz processor, and 24GB RAM. |
| Software Dependencies | No | The paper mentions several baseline methods (e.g., ATLA-PPO, PROTECTED, S-MuZero) and environments (e.g., Gymnasium, MuJoCo), but it does not specify versions for programming languages, libraries, or other software components used to implement RobustZero or run the experiments. |
| Experiment Setup | Yes | The pseudocode in Algorithm 2 lists input parameters like "Number of iterations N_iter, number of updates per iteration N_u, batch size B, hyperparameters λ1 and λ2, and step-size ς". Additionally, the paper specifies attack radii: "we set ϵ = 0.20 for two classical control environments... ϵ = 0.10 for the three energy environments... ϵ = 0.15 for the three transportation environments... and ϵ = 0.075 for Hopper, ϵ = 0.05 for Walker2d, ϵ = 0.15 for Halfcheetah, and ϵ = 0.15 for Ant, respectively." It also details how w_t^1 and w_t^2 are adjusted dynamically using λ1 and λ2. |
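The per-environment attack radii quoted above can be collected into a small lookup table. The sketch below is illustrative only: the dictionary and function names are mine, not from the paper, and the energy environments are grouped by family because the excerpt does not name them individually.

```python
# Attack radii (epsilon) as quoted in the paper's experiment setup.
# Family-level values apply to every environment in that family;
# MuJoCo tasks have individual per-task values.
EPSILON_BY_FAMILY = {
    "classical_control": 0.20,  # CartPole, Pendulum
    "energy": 0.10,             # three energy system environments
    "transportation": 0.15,     # Highway, Intersection, Racetrack
}

EPSILON_MUJOCO = {
    "Hopper": 0.075,
    "Walker2d": 0.05,
    "Halfcheetah": 0.15,
    "Ant": 0.15,
}


def attack_radius(env_name, family=None):
    """Return the perturbation radius for an environment.

    MuJoCo tasks are looked up by name; other environments fall back
    to their family-level value.
    """
    if env_name in EPSILON_MUJOCO:
        return EPSILON_MUJOCO[env_name]
    return EPSILON_BY_FAMILY[family]
```

For example, `attack_radius("Hopper")` returns 0.075, while `attack_radius("Racetrack", family="transportation")` returns the transportation-wide value 0.15.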