Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-Statistic Regression

Authors: Junhyung Park, Uri Shalit, Bernhard Schölkopf, Krikamol Muandet

ICML 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on synthetic, semi-synthetic and real datasets demonstrate the merits of our approach." |
| Researcher Affiliation | Academia | Max Planck Institute for Intelligent Systems, Tübingen, Germany; Technion, Israel Institute of Technology. |
| Pseudocode | Yes | Algorithm 1: Kernel conditional discrepancy (KCD) test of conditional distributional treatment effect. |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We demonstrate the use of our methods on the Infant Health and Development Program (IHDP) dataset (Hill, 2011, Section 4)." |
| Dataset Splits | No | The paper describes the IHDP dataset and how outcomes are simulated, but it does not specify explicit train/validation/test split percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions Python and the Falkon library but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper does not provide specific details about the experimental setup, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
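The table above notes that the paper provides pseudocode for Algorithm 1, the kernel conditional discrepancy (KCD) test. For orientation only, below is a minimal sketch of a generic permutation-based kernel discrepancy test between treatment groups. It is not the authors' Algorithm 1: the paper's test conditions on covariates via kernel conditional mean embeddings and U-statistic regression and resamples treatment labels using an estimated propensity score, whereas this sketch permutes labels uniformly and ignores covariates. The function names and the RBF kernel choice are illustrative assumptions.

```python
# Illustrative sketch of a permutation-based kernel discrepancy test.
# NOT the paper's KCD test (Algorithm 1); names here are hypothetical.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))

def mmd_statistic(Y0, Y1, lengthscale=1.0):
    """Biased estimate of squared MMD between two outcome samples."""
    k00 = rbf_kernel(Y0, Y0, lengthscale).mean()
    k11 = rbf_kernel(Y1, Y1, lengthscale).mean()
    k01 = rbf_kernel(Y0, Y1, lengthscale).mean()
    return k00 + k11 - 2.0 * k01

def kcd_style_permutation_test(Y, T, n_perm=500, lengthscale=1.0, seed=0):
    """Permutation p-value for a distributional difference between T=0 and T=1.

    A conditional version (as in the paper) would resample treatment labels
    according to an estimated propensity score e(x) instead of uniformly.
    """
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, dtype=float).reshape(len(Y), -1)  # ensure 2-D outcomes
    T = np.asarray(T)
    obs = mmd_statistic(Y[T == 0], Y[T == 1], lengthscale)
    null_stats = np.empty(n_perm)
    for b in range(n_perm):
        T_perm = rng.permutation(T)  # uniform label permutation (unconditional null)
        null_stats[b] = mmd_statistic(Y[T_perm == 0], Y[T_perm == 1], lengthscale)
    # Smoothed permutation p-value
    return (1 + np.sum(null_stats >= obs)) / (1 + n_perm)

# Example usage with synthetic one-dimensional outcomes:
# rng = np.random.default_rng(1)
# Y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(0.5, 1.0, 100)])
# T = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])
# print(kcd_style_permutation_test(Y, T))
```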