Learning Disentangled Representations for CounterFactual Regression
Authors: Negar Hassanpour, Russell Greiner
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results show that the proposed method achieves state-of-the-art performance in both individual and population based evaluation measures. |
| Researcher Affiliation | Academia | Negar Hassanpour & Russell Greiner, Department of Computing Science, University of Alberta, Edmonton, Alberta, T6G 2E8, Canada ({hassanpo,rgreiner}@ualberta.ca) |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper thanks another author for publishing and maintaining the code-base of a *different* method (CFR), but it neither states that the code for the proposed DR-CFR method is open-source nor provides a link to it. |
| Open Datasets | Yes | In this work, we use two such benchmarks: our synthetic series of datasets as well as a publicly available benchmark: the Infant Health and Development Program (IHDP) (Hill, 2011). |
| Dataset Splits | No | The paper mentions training sample sizes and discusses evaluation, but it does not specify explicit training/validation/test splits with percentages or counts, nor a detailed splitting methodology. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions that its methods build on a core code-base (CFR) and references a package (NPCI), but it does not list specific software dependencies with version numbers for the implementation. |
| Experiment Setup | No | While the paper describes the synthetic data generation process and states that hyperparameters were searched, it does not provide specific hyperparameter values, optimizer settings, or other concrete details of the experimental setup for training their models. |