A Maximum-Entropy Approach to Off-Policy Evaluation in Average-Reward MDPs

Authors: Nevena Lazić, Dong Yin, Mehrdad Farajtabar, Nir Levine, Dilan Görür, Chris Harris, Dale Schuurmans

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare our approach to other policy evaluation methods relying on function approximation. We demonstrate the effectiveness of our proposed OPE approaches in multiple environments.
Researcher Affiliation | Collaboration | Nevena Lazić, Dong Yin, Mehrdad Farajtabar, Nir Levine, Dilan Görür, Chris Harris, Dale Schuurmans (DeepMind, Google). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Dale Schuurmans is also affiliated with the University of Alberta.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link confirming the release of open-source code for the described methodology.
Open Datasets | Yes | Experiments use OpenAI Gym environments [Brockman et al., 2016] (Taxi and Acrobot), the Taxi environment [Dietterich, 2000], and the linear quadratic (LQ) control system of Dean et al. [2019]. (A minimal environment-construction sketch follows the table.)
Dataset Splits | No | The paper does not provide specific details on training, validation, and test dataset splits with percentages or sample counts.
Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions CVXPY, Adam [Kingma and Ba, 2014], and the POLITEX algorithm [Abbasi-Yadkori et al., 2019a], but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We regularize the covariances of all regression problems using λI with tuned λ. For MAXENT, we optimize the parameters using full-batch Adam [Kingma and Ba, 2014], and normalize the distributions empirically. We set π to be 0.05-greedy w.r.t. a hand-coded optimal strategy, and β to be ε-greedy w.r.t. π. (A minimal setup sketch also follows the table.)
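
For the environments named in the Open Datasets row, the following is a minimal sketch of how they could be instantiated with OpenAI Gym [Brockman et al., 2016]. The environment IDs (Taxi-v3, Acrobot-v1) and the classic gym step/reset API are assumptions; the paper does not state which package versions were used, and the LQ control system of Dean et al. [2019] is not part of Gym.

```python
# Hypothetical sketch (not the authors' code): constructing the Gym
# environments referenced in the paper. Environment IDs are assumed.
import gym

taxi = gym.make("Taxi-v3")        # tabular environment (discrete states/actions)
acrobot = gym.make("Acrobot-v1")  # continuous-state environment (function approximation)

# Classic gym interface (pre-0.26 API assumed): reset returns an observation,
# step returns (observation, reward, done, info).
obs = taxi.reset()
obs, reward, done, info = taxi.step(taxi.action_space.sample())
```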
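The Experiment Setup row compresses several implementation details. The sketch below illustrates two of them under stated assumptions: ridge-style regularization of a regression covariance by λI, and construction of ε-greedy target/behavior policies. All names (ridge_solve, epsilon_greedy, lam, eps) and the placeholder state/action counts are illustrative, not taken from the paper's code.

```python
import numpy as np

def ridge_solve(features, targets, lam=1e-3):
    """Least-squares solve with the empirical covariance regularized by lam * I,
    mirroring the lambda*I regularization mentioned in the setup (lam is tuned)."""
    cov = features.T @ features            # empirical (unnormalized) covariance
    cov += lam * np.eye(cov.shape[0])      # add lambda * I
    return np.linalg.solve(cov, features.T @ targets)

def epsilon_greedy(greedy_actions, n_actions, eps):
    """Per-state action probabilities for an eps-greedy policy:
    mass (1 - eps) on the greedy action, eps spread uniformly over all actions."""
    n_states = len(greedy_actions)
    probs = np.full((n_states, n_actions), eps / n_actions)
    probs[np.arange(n_states), greedy_actions] += 1.0 - eps
    return probs

# Target policy pi: 0.05-greedy w.r.t. a hand-coded strategy;
# behavior policy beta: eps-greedy w.r.t. pi (eps is a free parameter here).
hand_coded_actions = np.zeros(500, dtype=int)   # placeholder greedy actions
pi = epsilon_greedy(hand_coded_actions, n_actions=6, eps=0.05)
beta = epsilon_greedy(pi.argmax(axis=1), n_actions=6, eps=0.2)
```

The 500 states and 6 actions match the Taxi environment's dimensions but are only placeholders; in the paper the same ε-greedy construction would be applied per environment with its own hand-coded strategy.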