Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation
Authors: Valentyn Melnychuk, Dennis Frauen, Stefan Feuerriegel
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our bounds in a series of experiments. |
| Researcher Affiliation | Academia | Valentyn Melnychuk, Dennis Frauen & Stefan Feuerriegel, LMU Munich & Munich Center for Machine Learning, Munich, Germany |
| Pseudocode | No | The paper describes its neural refutation framework in three stages (Stage 0, Stage 1, Stage 2) but does not provide pseudocode or a formally labeled algorithm block. |
| Open Source Code | Yes | Code is available at https://github.com/Valentyn1997/RICB. |
| Open Datasets | Yes | IHDP100 dataset. The Infant Health and Development Program (IHDP) (Hill, 2011; Shalit et al., 2017) is a classical benchmark for CATE estimation... HC-MNIST dataset. HC-MNIST is a semi-synthetic benchmark on top of the MNIST image dataset (Jesson et al., 2021). The MNIST dataset contains n_train = 60,000 train and n_test = 10,000 test images (LeCun, 1998). |
| Dataset Splits | Yes | We performed hyperparameter tuning at all the stages of our refutation framework for all the networks based on five-fold cross-validation using the training subset. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU, GPU models, memory, or cluster specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like 'PyTorch and Pyro' and optimizers like 'AdamW', but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Implementation. ... Each network was trained with n_iter = 5,000 train iterations. Hyperparameters. We performed hyperparameter tuning at all the stages of our refutation framework for all the networks based on five-fold cross-validation using the training subset. At each stage, we did a random grid search with respect to different tuning criteria. Table 5 provides all the details on hyperparameters tuning. A sketch of this tuning protocol follows the table. |
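
The tuning protocol quoted above (five-fold cross-validation on the training subset combined with a random grid search) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `Ridge` model, the grid values, and the synthetic data are placeholders standing in for the paper's neural networks and the stage-specific tuning criteria from their Table 5.

```python
"""Minimal sketch of five-fold CV with random grid search.

Assumptions (not from the paper): Ridge regression as the model,
a two-parameter grid, mean squared error as the tuning criterion,
and synthetic data in place of the training subset.
"""
import random
from itertools import product

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = random.Random(0)
np_rng = np.random.default_rng(0)

# Synthetic stand-in for the training subset.
X = np_rng.normal(size=(500, 10))
y = X @ np_rng.normal(size=10) + np_rng.normal(scale=0.1, size=500)

# Hypothetical hyperparameter grid (placeholder values).
grid = {"alpha": [0.01, 0.1, 1.0, 10.0], "fit_intercept": [True, False]}

# Random grid search: sample a subset of the full Cartesian grid.
all_configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
sampled = rng.sample(all_configs, k=min(5, len(all_configs)))

# Five-fold cross-validation on the training subset.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
best_config, best_score = None, float("inf")
for config in sampled:
    fold_scores = []
    for train_idx, val_idx in kf.split(X):
        model = Ridge(**config).fit(X[train_idx], y[train_idx])
        fold_scores.append(
            mean_squared_error(y[val_idx], model.predict(X[val_idx]))
        )
    score = float(np.mean(fold_scores))
    if score < best_score:
        best_config, best_score = config, score

print(f"Selected config: {best_config} (mean CV MSE: {best_score:.4f})")
```

In the paper's framework this selection step would be repeated at each stage (Stage 0, Stage 1, Stage 2) with that stage's own tuning criterion, rather than a single MSE objective as shown here.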