Meta-Learners for Partially-Identified Treatment Effects Across Multiple Environments

Authors: Jonas Schweisthal, Dennis Frauen, Mihaela van der Schaar, Stefan Feuerriegel

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We further demonstrate the effectiveness of our meta-learners across various experiments using both simulated and real-world data."
Researcher Affiliation | Academia | "1 LMU Munich, Germany; 2 Munich Center for Machine Learning (MCML), Germany; 3 University of Cambridge, UK."
Pseudocode | Yes | "Algorithm 1: Two-stage learners for estimating bounds"
Open Source Code | Yes | "Code is available at https://github.com/JSchweisthal/BoundMetaLearners."
Open Datasets | Yes | "Here, we perform a case study using a dataset with COVID-19 hospitalizations in Brazil across different regions (Baqui et al., 2020)."
Dataset Splits | Yes | "To create the simulated data used in Sec. 6, for both datasets, we sample n = 10000 from the data-generating process above. We then split the data into train (70%), val (10%), and test (20%) sets."
Hardware Specification | No | The paper does not specify the exact hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using a "software package https://github.com/AliciaCurth/CATENets" and "PyTorch CATE meta-learners" but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "Here, the networks for the first- and second-stage models are simple MLPs with 2 hidden layers and hidden neuron size of 100."
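The 70%/10%/20% train/val/test split quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not taken from the paper's repository; the function name `train_val_test_split` and the seed are assumptions for the example.

```python
import numpy as np

def train_val_test_split(n, train=0.7, val=0.1, seed=0):
    # Illustrative helper (not from the paper's code): shuffle indices and
    # split them 70/10/20 into train/val/test, as described in the paper.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(train * n)
    n_val = int(val * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = train_val_test_split(10000)  # n = 10000 as in the paper
print(len(tr), len(va), len(te))  # 7000 1000 2000
```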
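The Experiment Setup row describes the first- and second-stage networks as MLPs with 2 hidden layers of 100 neurons. A forward-pass sketch of that architecture is below; it uses NumPy rather than the PyTorch implementation the paper refers to, and the helper names (`init_mlp`, `mlp_forward`), ReLU activations, and weight initialization are assumptions for illustration only.

```python
import numpy as np

def init_mlp(d_in, hidden=100, d_out=1, seed=0):
    # Illustrative initialization: layer sizes d_in -> 100 -> 100 -> d_out,
    # matching the "2 hidden layers, hidden neuron size of 100" description.
    rng = np.random.default_rng(seed)
    sizes = [d_in, hidden, hidden, d_out]
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, params):
    # ReLU hidden layers followed by a linear output head.
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)
    W, b = params[-1]
    return h @ W + b

params = init_mlp(d_in=5)                  # 5 input features, assumed
out = mlp_forward(np.zeros((4, 5)), params)
print(out.shape)  # (4, 1)
```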