Adversarial Learning of Distributional Reinforcement Learning

Authors: Yang Sui, Yukun Huang, Hongtu Zhu, Fan Zhou

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we perform numerical studies on the Atari 2600 platform to evaluate the proposed method.
Researcher Affiliation | Academia | (1) School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, China; (2) Departments of Biostatistics, Statistics, Computer Science, and Genetics, The University of North Carolina at Chapel Hill, Chapel Hill, USA.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | In this section, we perform numerical studies on the Atari 2600 platform to evaluate the proposed method.
Dataset Splits | No | The paper mentions training models but does not provide specific details on dataset splits (e.g., percentages or sample counts) for training, validation, or testing, nor does it reference predefined standard splits with proper citations.
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper references TensorFlow and PyTorch through citations but does not specify version numbers for these or any other software dependencies, as a reproducible description would require.
Experiment Setup | Yes | Specifically, we focus on the breakout environment and the C51 algorithm while everything can be extended to other games and DRL algorithms. ... We carry out some further analyses by changing the hyperparameter γ from 0.98 to 1. ... The perturbation used in this work takes the form of c∇f(s), which is proportional to the gradient of the objective function and c is an extremely small constant.
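
The last row describes the core mechanism the paper quotes: a state perturbation proportional to the gradient of the objective, s' = s + c∇f(s), with c an extremely small constant. The following is a minimal PyTorch sketch of that form, not the authors' implementation: value_net is a hypothetical stand-in for whatever network defines the objective f (the C51 loss could be substituted without changing the structure), and the default c is purely illustrative.

```python
import torch
import torch.nn as nn

def perturb_state(value_net: nn.Module, state: torch.Tensor, c: float = 1e-3) -> torch.Tensor:
    """Shift a state along the gradient of a scalar objective f(s).

    Sketch of the perturbation form c * grad_s f(s) quoted above;
    `value_net` is a hypothetical stand-in for the paper's objective.
    """
    state = state.clone().detach().requires_grad_(True)
    objective = value_net(state).sum()  # reduce the network output to a scalar f(s)
    objective.backward()                # populates state.grad with grad_s f(s)
    with torch.no_grad():
        perturbed = state + c * state.grad  # perturbation proportional to the gradient
    return perturbed.detach()
```

An FGSM-style attack would take the sign of the gradient instead; the quoted description points to the raw gradient, so the sketch keeps c * state.grad.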