Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Distributionally Robust Policy Evaluation under General Covariate Shift in Contextual Bandits

Authors: Yihong Guo, Hao Liu, Yisong Yue, Anqi Liu

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results indicate that our approach significantly outperforms baseline methods, most notably in 90% of the cases under the policy shift-only settings and 72% of the scenarios under the general covariate shift settings.
Researcher Affiliation | Academia | Yihong Guo (EMAIL), Department of Computer Science, Johns Hopkins University; Hao Liu (EMAIL), Department of Computing and Mathematical Sciences, Caltech; Yisong Yue (EMAIL), Department of Computing and Mathematical Sciences, Caltech; Anqi Liu (EMAIL), Department of Computer Science, Johns Hopkins University
Pseudocode | Yes | Algorithm 1: Stochastic Gradient Descent for Robust Regression under General Covariate Shift
Open Source Code | Yes | The code for the experiments is available at https://github.com/guoyihonggyh/Distributionally-Robust-Policy-Evaluation-under-General-Covariate-Shift-in-Contextual-Bandits.
Open Datasets | Yes | In line with the experimental settings employed in previous studies (Dudik et al., 2014; Wang et al., 2017; Farajtabar et al., 2018; Su et al., 2019b;a), we conduct experiments on 9 UCI datasets by transforming the classification problems to the contextual bandits setting.
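The supervised-to-bandit conversion quoted above follows a standard recipe in the off-policy evaluation literature: each feature vector becomes a context, a logging policy samples one action (a candidate class label), and the reward is 1 only if that action matches the true label. A minimal sketch of this conversion, assuming a uniform logging policy (the paper does not specify the policy used here; `classification_to_bandit` is our illustrative name):

```python
import numpy as np

def classification_to_bandit(X, y, seed=0):
    """Convert a classification dataset to logged contextual-bandit feedback.

    Each context x gets one action sampled from a uniform logging policy
    over the class labels; the observed reward is 1 if the action matches
    the true label, else 0. Only (x, action, reward) is retained, as in
    off-policy evaluation.
    """
    rng = np.random.default_rng(seed)
    n_actions = int(y.max()) + 1
    actions = rng.integers(0, n_actions, size=len(y))  # uniform logging policy
    rewards = (actions == y).astype(float)             # binary reward
    return X, actions, rewards

# Toy example: 6 samples, 2 features, 3 classes
X = np.arange(12, dtype=float).reshape(6, 2)
y = np.array([0, 1, 2, 0, 1, 2])
contexts, actions, rewards = classification_to_bandit(X, y)
```

Only the sampled action's reward is observed, which is exactly the partial-feedback structure that makes off-policy evaluation under covariate shift nontrivial.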
Dataset Splits | Yes | 1. Data Split: We randomly split the original dataset into a 75% training set D_TR and a 25% test set D_TS.
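The reported 75/25 random split can be sketched as follows (the function name and seed handling are our own; the paper only states the ratio and that the split is random):

```python
import numpy as np

def split_75_25(n_samples, seed=0):
    """Randomly partition sample indices into a 75% training set (D_TR)
    and a 25% test set (D_TS), matching the paper's reported ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(0.75 * n_samples)
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_75_25(100)
```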
Hardware Specification | No | The paper does not explicitly mention specific hardware details such as GPU/CPU models, memory, or cloud instances used for running the experiments.
Software Dependencies | No | The paper mentions software components such as logistic regression models and SGD for optimization, and implicitly relies on common ML libraries, but does not provide specific version numbers for any of these dependencies.
Experiment Setup | Yes | Hyperparameters. The base distribution for robust regression is a Gaussian distribution with mean = 0.6 and variance = 1. θ is updated with SGD. We tune the hyperparameters with a grid search on the learning rate in [0.001, 0.0005]. We also set a learning rate decay for the learning of θ, where the learning rate is multiplied by 10/(10 + i − 1) at the i-th epoch. The batch size is searched in [8, 32, 64, 256].
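The decay factor in the quote above is garbled in extraction; a plausible reading is that the base learning rate is scaled by 10/(10 + i − 1) at the i-th (1-indexed) epoch, which at epoch 1 leaves the rate unchanged and halves it by epoch 11. A minimal sketch under that assumption (the function name is ours, and the exact formula may differ from the authors' implementation):

```python
def lr_at_epoch(base_lr, epoch):
    """Learning rate under the assumed decay schedule: the base rate is
    multiplied by 10 / (10 + i - 1) at the i-th epoch (1-indexed).

    This decay factor is our reading of the paper's garbled formula
    and is not guaranteed to match the authors' code.
    """
    return base_lr * 10.0 / (10.0 + epoch - 1)

# Epoch 1 keeps the base rate; epoch 11 halves it.
```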