Distributionally Robust $Q$-Learning

Authors: Zijian Liu, Qinxun Bai, Jose Blanchet, Perry Dong, Wei Xu, Zhengqing Zhou, Zhengyuan Zhou

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulation results further demonstrate its strong empirical robustness.
Researcher Affiliation | Collaboration | New York University, Stern School of Business; Horizon Robotics Inc., CA, USA; Department of Management Science and Engineering, Stanford University, CA, USA; Department of Electrical Engineering and Computer Sciences, UC Berkeley, CA, USA; Arena Technologies.
Pseudocode | Yes | Algorithm 1: Distributionally Robust Q-Learning
Open Source Code | No | The paper does not provide any link or explicit statement about the availability of open-source code for the described methodology.
Open Datasets | No | The paper describes a custom supply chain model and generates samples from a simulator, indicating a simulated environment rather than a publicly available dataset.
Dataset Splits | No | The paper does not specify training, validation, and test dataset splits with percentages or counts. It defines a simulated environment and parameters for the learning algorithm.
Hardware Specification | No | The paper mentions "limit computation resources" but does not provide any specific details about the hardware used (e.g., CPU, GPU models, or memory).
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | Yes | In our experiments, due to the limit computation resources, we fix n = 10. Besides, we take h = 1, p = 2, k = 3 and set the discount factor γ = 0.9, starting from s1 = 0. ... In the simulation, we set δ = 1 as the perturbation parameter. At the k-th step of Algorithm 1, we set the learning rate αk = 1/(1 + (1 − γ)(k − 1)) to satisfy the Robbins–Monro condition. ... For the parameter ε used in our estimator, we consider ε ∈ {0.49, 0.499, 0.5, 0.6}.
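The step-size schedule quoted above, αk = 1/(1 + (1 − γ)(k − 1)) with γ = 0.9, can be written as a minimal sketch. The function name below is our own; only the formula and the value of γ come from the paper's experiment description. A decay of this harmonic type satisfies the Robbins–Monro conditions (the step sizes sum to infinity while their squares sum to a finite value).

```python
def learning_rate(k: int, gamma: float = 0.9) -> float:
    """Step size at iteration k (1-indexed), as quoted in the experiment setup:
    alpha_k = 1 / (1 + (1 - gamma) * (k - 1))."""
    return 1.0 / (1.0 + (1.0 - gamma) * (k - 1))

if __name__ == "__main__":
    # The schedule starts at 1 and decays roughly like 1 / ((1 - gamma) * k).
    for k in (1, 2, 10, 100):
        print(k, learning_rate(k))
```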