Combining parametric and nonparametric models for off-policy evaluation
Authors: Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, Finale Doshi-Velez
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Across a variety of domains, our mixture-based approach outperforms the individual models alone as well as state-of-the-art importance sampling-based estimators." See also Section 6, Experimental Results. |
| Researcher Affiliation | Academia | Omer Gottesman¹, Yao Liu², Scott Sussex¹, Emma Brunskill², Finale Doshi-Velez¹ (¹Harvard University, ²Stanford University). |
| Pseudocode | Yes | "Algorithm 1 presents the pseudo-code for our MoE simulator." (A hedged sketch of one MoE transition appears after this table.) |
| Open Source Code | No | The information required to answer this question is not found in the paper. The paper does not provide any explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We compare our MoE simulator with different OPE estimators for two medical simulators: one for cancer (Ribba et al., 2012) and one for HIV (Ernst et al., 2006) patients. |
| Dataset Splits | No | The information required to answer this question is not found in the paper. The paper mentions generating '100 observed trajectories' but does not provide specific training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit standard splits). |
| Hardware Specification | No | The information required to answer this question is not found in the paper. The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The information required to answer this question is not found in the paper. The paper mentions training a 'feed-forward neural net' but does not provide specific software dependencies or their version numbers (e.g., libraries, frameworks, or solvers). |
| Experiment Setup | Yes | The parametric model is a feed-forward neural net with a single hidden layer of 64 units and a tanh activation function. (A sketch of this architecture follows the table.) |
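The paper's Algorithm 1 describes a mixture-of-experts (MoE) simulator that, at each simulated transition, picks between a parametric model and a nonparametric nearest-neighbor model. The sketch below illustrates one such transition step under our assumptions: the function names, the Lipschitz-style error bound for the nonparametric expert, and the scalar `parametric_error` estimate are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def moe_step(state, action, parametric_model, data_states, data_actions,
             data_next_states, lipschitz_const, parametric_error):
    """One transition of a mixture-of-experts simulator in the spirit of
    Algorithm 1 (a sketch; names and error rules are assumptions).

    The nonparametric expert returns the observed next state of the
    nearest neighbor in the batch data; its error is bounded (up to a
    Lipschitz smoothness constant) by the distance to that neighbor.
    The expert with the smaller estimated error supplies the transition.
    """
    # Nonparametric prediction: nearest neighbor in (state, action) space.
    query = np.concatenate([state, action])
    keys = np.concatenate([data_states, data_actions], axis=1)
    dists = np.linalg.norm(keys - query, axis=1)
    nn_idx = np.argmin(dists)
    nonparam_pred = data_next_states[nn_idx]
    nonparam_error = lipschitz_const * dists[nn_idx]  # Lipschitz-style bound

    # Parametric prediction (e.g., the paper's feed-forward network).
    param_pred = parametric_model(state, action)

    # Mixture-of-experts rule: use whichever model we estimate to be
    # more accurate for this particular transition.
    if nonparam_error < parametric_error:
        return nonparam_pred
    return param_pred
```

Rolling out full trajectories with this step function (and averaging returns under the evaluation policy) yields a model-based off-policy value estimate that can fall back on batch data in well-covered regions and on the parametric model elsewhere.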
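The experiment setup row quotes the only architectural detail the paper gives for the parametric model: one hidden layer of 64 units with tanh. A minimal PyTorch sketch of such a network is below; the input/output dimensions, loss, and optimizer settings are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Feed-forward transition model per the paper's description:
    one hidden layer of 64 units with a tanh activation."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),  # 64 hidden units
            nn.Tanh(),                              # tanh activation
            nn.Linear(64, state_dim),               # predicts next state
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Hypothetical usage; dimensions, learning rate, and loss are assumed.
model = TransitionModel(state_dim=4, action_dim=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```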