Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models

Authors: Yuta Saito, Shota Yasui

ICML 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations demonstrate that our metric outperforms existing metrics in both model selection and hyperparameter tuning tasks. |
| Researcher Affiliation | Collaboration | Tokyo Institute of Technology; Cyber Agent, Inc. |
| Pseudocode | Yes | Algorithm 1: Counterfactual Cross-Validation (CF-CV) |
| Open Source Code | Yes | Our code used to conduct the semi-synthetic experiments is available at https://github.com/usaito/counterfactual-cv |
| Open Datasets | Yes | We used the Infant Health and Development Program (IHDP) dataset provided by (Hill, 2011). |
| Dataset Splits | Yes | We conducted the experimental procedure over 100 different realizations with 35/35/30 train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using scikit-learn and EconML but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | No | The paper mentions tuning hyperparameters and discusses the hyperparameter search space, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) in the main text. |
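The "Dataset Splits" row describes repeating the evaluation over 100 realizations, each with a 35/35/30 train/validation/test split. A minimal sketch of that protocol is below; the function name `split_indices`, the seeding scheme, and the use of the realization index as the seed are illustrative assumptions, not the authors' actual code (the dataset size of 747 units is the standard IHDP sample from Hill, 2011).

```python
import numpy as np

def split_indices(n, seed, fracs=(0.35, 0.35, 0.30)):
    """Shuffle indices 0..n-1 and cut them into train/val/test parts.

    `fracs` gives the train/validation/test proportions; the test set
    absorbs any rounding remainder so the three parts cover all n units.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# Repeat over 100 different realizations, as reported in the table above.
sizes = []
for realization in range(100):
    train, val, test = split_indices(n=747, seed=realization)
    sizes.append((len(train), len(val), len(test)))
```

Seeding each realization with its index keeps the 100 splits reproducible while still varying them, which is what makes the reported results comparable across metric variants.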