Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PNAct: Crafting Backdoor Attacks in Safe Reinforcement Learning

Authors: Weiran Guo, Guanjun Liu, Ziyuan Zhou, Ling Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conduct experiments to evaluate the effectiveness of our proposed backdoor attack framework, evaluating it with the established metrics.
Researcher Affiliation | Academia | Weiran Guo, Guanjun Liu, Ziyuan Zhou and Ling Wang (Tongji University)
Pseudocode | Yes | Algorithm 1: Training process of PNAct
Open Source Code | Yes | Our code and supplementary material are available at https://github.com/azure-123/PNAct.
Open Datasets | Yes | To verify whether PNAct satisfies the safety reinforcement learning backdoor attack requirements, we apply it to the Safety-Gymnasium environment [Ji et al., 2023], which has strict safety constraints.
Dataset Splits | No | The paper mentions that evaluation was conducted in the Safety-Gymnasium environment over '100 complete episodes', but it does not specify explicit training, validation, or test dataset splits in terms of percentages, sample counts, or predefined split files. In RL, the concept of a fixed dataset split is typically replaced by continuous interaction with the environment for training and evaluation.
Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU models, memory specifications, or cloud computing resources used for running the experiments.
Software Dependencies | No | The paper mentions several algorithms used (e.g., PPO, PPO-Lag, TRPO-Lag, RCPO) but does not provide specific version numbers for any software libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow versions) that would be needed for replication.
Experiment Setup | No | The paper describes the attack signal generation interval (f = n|τ|) and duration (k = |τ|), and mentions varying 'n' in {5, 10, 15, 20, 25} for experiments. It also lists weighting factors (α, β, µ) and a balancing factor (λ) in the loss function, but does not provide specific numerical values for these hyperparameters. Key training details such as learning rates, batch sizes, number of training steps/epochs, or optimizer configurations for the RL algorithms are not specified in the main text.
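The attack schedule described in the Experiment Setup row (trigger interval f = n|τ|, trigger duration k = |τ|) can be illustrated with a minimal sketch. This is a hypothetical reading of that description, not code from the paper; the names `attack_active` and `traj_len` are illustrative.

```python
def attack_active(t: int, traj_len: int, n: int) -> bool:
    """Return True if the backdoor trigger is active at timestep t,
    assuming an interval f = n * |tau| and a duration k = |tau|,
    where traj_len stands in for |tau|."""
    f = n * traj_len  # steps between the starts of attack windows
    k = traj_len      # length of each attack window
    return (t % f) < k

# With traj_len = 4 and n = 5 (one of the paper's tested values),
# the trigger is active for the first 4 steps of every 20-step cycle.
schedule = [attack_active(t, traj_len=4, n=5) for t in range(24)]
```

Under this reading, larger n makes the trigger rarer while the window length stays fixed, which matches the paper varying only n across experiments.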