Provable Defense against Backdoor Policies in Reinforcement Learning
Authors: Shubham Bharti, Xuezhou Zhang, Adish Singla, Jerry Zhu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that our sanitization defense performs well on two Atari game environments. In this section, we present some experimental results that validate our sanitization algorithm against backdoor attacks in Atari game environments. |
| Researcher Affiliation | Academia | Shubham Kumar Bharti UW-Madison Madison, WI, USA skbharti@cs.wisc.edu; Xuezhou Zhang Princeton University Princeton, NJ, USA xz7392@princeton.edu; Adish Singla MPI-SWS Saarbrücken, Germany adishs@mpi-sws.org; Xiaojin Zhu UW-Madison Madison, WI, USA jerryzhu@cs.wisc.edu |
| Pseudocode | Yes | Algorithm 2: Defense through subspace sanitization (a hedged sketch of this idea follows the table) |
| Open Source Code | Yes | The code is available at https://github.com/skbharti/Provable-Defense-in-RL |
| Open Datasets | No | The paper mentions 'Atari game environments', specifically the 'Boxing-Ram game' and 'Breakout game', but does not provide concrete access information (URL, DOI, repository name, or formal citation with authors/year) for the specific datasets used in these environments. |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset split information (exact percentages, sample counts, or citations to predefined splits) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper describes neural network architectures and general training schemes, but it lacks specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
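The Pseudocode row above quotes "Algorithm 2: Defense through subspace sanitization". Below is a minimal sketch of that idea under one reading of the paper: estimate the clean-state subspace from trigger-free observations via SVD, then project each incoming observation onto that subspace before passing it to the (possibly backdoored) policy. The function names, the `rank` parameter, and the `policy`/`env` usage are illustrative placeholders and are not taken from the authors' repository.

```python
import numpy as np

def estimate_safe_subspace(clean_obs, rank):
    """Estimate the clean-state subspace from a matrix of clean observations.

    clean_obs: (n, d) array, each row a flattened trigger-free observation.
    rank:      assumed dimension of the clean-state subspace (hypothetical knob).
    Returns a (d, rank) orthonormal basis spanned by the top right-singular vectors.
    """
    # The top-`rank` right singular vectors span the estimated clean subspace.
    _, _, vt = np.linalg.svd(clean_obs, full_matrices=False)
    return vt[:rank].T  # shape (d, rank)

def sanitize(obs, basis):
    """Project a (possibly triggered) observation onto the estimated clean subspace."""
    return basis @ (basis.T @ obs)

# Hypothetical usage with a backdoored policy `policy(obs) -> action` and an
# environment `env` (placeholder names, not the paper's code):
# basis = estimate_safe_subspace(clean_obs_matrix, rank=64)
# obs = env.reset()
# action = policy(sanitize(obs.ravel(), basis))
```

The intended effect, per the paper's high-level description, is that projecting onto the clean subspace removes trigger components that lie outside it, so the backdoored policy behaves approximately like its clean counterpart on sanitized observations.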