Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Universal Black-Box Targeted Reward Poisoning Attack Against Online Deep Reinforcement Learning
Authors: Yinglun Xu, Gagandeep Singh
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, on a diverse set of popular DRL environments learned by state-of-the-art DRL algorithms, we verify that our attack efficiently leads the learning agent to various target policies with limited budgets. ... [Section 4, Experiments] Although the assumption that the algorithm used by the agent is an efficient learning algorithm in Definition 3.1 may not strictly hold in practice, we demonstrate in this section that our adaptive target attack is effective in the universal learning scenarios we test. |
| Researcher Affiliation | Academia | Yinglun Xu EMAIL Department of Computer Science University of Illinois at Urbana-Champaign Gagandeep Singh EMAIL Department of Computer Science University of Illinois at Urbana-Champaign |
| Pseudocode | Yes | Algorithm 1 Adaptive Target Attack Framework. 1: Input: target policy π†. 2: Params: distance measure d, maximal per-step perturbation B, polynomial factor q. 3: for t = 1, 2, ..., T do. 4: Observe environment state st, agent's action at, and reward signal rt. 5: Perturb reward signal rt ← rt − d(at, π†(st))^q |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code for the methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | Here, we focus on the standard continuous robotic control problems from Mujoco (Todorov et al., 2012), including Half Cheetah, Hopper, and Walker. We verify that our attack works efficiently against the environments with discrete action spaces in Appendix C. For the learning algorithms, we consider state-of-the-art DRL algorithms, including DDPG (Lillicrap et al., 2015), TD3 (Dankwa & Zheng, 2019), and SAC (Haarnoja et al., 2018). ... D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020. (Fu et al., 2020). ... Gym (Brockman et al., 2016): Mountain Car and Acrobot. |
| Dataset Splits | No | The paper describes using standard online reinforcement learning environments and mentions the total number of training steps (e.g., "The number of training steps is set as 6 × 10^5"), but it does not specify traditional train/test/validation dataset splits, which are less common in online RL settings where agents continuously interact with an environment. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions specific DRL algorithms such as DDPG, TD3, SAC, DQN, and PPO, but it does not specify the version numbers of any software libraries, frameworks, or programming languages used for implementation. |
| Experiment Setup | Yes | The number of training steps is set as 6 × 10^5... For the Half Cheetah environment, the maximal per-step corruption is set as B = 50; for the Hopper and Walker environments, the value is set as B = 20. ... we test the attack with q = {0.5, 1, 2, 4} ... For the Half Cheetah environment, we set C/T = 4; for the Walker and Hopper environments, we set C/T = 3. |
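The per-step perturbation in Algorithm 1 (quoted in the Pseudocode row) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of Euclidean distance for `d`, the clipping of the penalty to the per-step budget `B`, and the function name `poison_reward` are all assumptions made for this sketch.

```python
import math

def poison_reward(r_t, a_t, target_a, q=2.0, B=20.0):
    """Illustrative per-step reward poisoning (sketch of Algorithm 1, line 5).

    Subtracts a distance-based penalty d(a_t, pi_dagger(s_t))^q whenever the
    agent's action deviates from the target policy's action for the current
    state. The penalty is clipped to the per-step budget B (an assumption of
    this sketch; the paper sets B = 50 for Half Cheetah, B = 20 for Hopper
    and Walker).
    """
    # Distance measure d: Euclidean distance between action vectors
    # (illustrative choice; the paper leaves d as a parameter).
    d = math.dist(a_t, target_a)
    # Perturb reward: r_t <- r_t - min(d^q, B)
    return r_t - min(d ** q, B)
```

With `q = 1` and no clipping triggered, the penalty is just the distance itself; larger `q` punishes large deviations disproportionately, which matches the paper's sweep over q = {0.5, 1, 2, 4}.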