Informative Subspace Learning for Counterfactual Inference
Authors: Yale Chang, Jennifer Dy
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on simulated datasets and real-world datasets demonstrate our proposed approach outperforms existing NNM approaches and other commonly used regression-based methods for counterfactual inference. |
| Researcher Affiliation | Academia | Yale Chang, Jennifer G. Dy Department of Electrical and Computer Engineering Northeastern University, Boston, MA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (e.g., a specific repository link or an explicit code release statement). |
| Open Datasets | Yes | IHDP dataset is an experimental dataset collected from the Infant Health and Development Program... This dataset was first introduced by (Johansson, Shalit, and Sontag 2016) |
| Dataset Splits | No | The paper describes generating multiple sets of outcomes for evaluation ("we repeat the above procedures 100 times and generate 100 sets of outcomes", "repeat the generative process described above 50 times") but does not provide specific percentages or counts for training, validation, and test splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using "R implementation" for BART and Causal Forest but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For our NNM-HSIC, we set the number of random features m = l = 100. We vary q, the dimensionality of subspace, from 1 to 10 and observe the result is not sensitive to its value in this range. Therefore we only report the result when q = 1. Gaussian kernel function is used to construct kernel matrices and the scale parameter σ is set using the median heuristic (Scholkopf and Smola 2001). We set the regularization coefficient λ = 10^-4 and the result is stable when λ is varied between 10^-6 and 10^-2. We set the number of random initializations to be 20 and obtain stable results. |
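The quoted setup relies on two standard components: the median heuristic for the Gaussian kernel scale σ, and random features (m = l = 100) to approximate kernel matrices. A minimal NumPy sketch of both, assuming the random features are random Fourier features (a common choice; the excerpt does not specify the construction, and the function names here are illustrative):

```python
import numpy as np

def median_heuristic_sigma(X):
    """Set the Gaussian kernel scale sigma to the median pairwise
    Euclidean distance between samples (Scholkopf and Smola 2001)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)  # guard against negatives from round-off
    iu = np.triu_indices(len(X), k=1)  # distinct pairs only
    return np.sqrt(np.median(d2[iu]))

def gaussian_kernel(X, sigma):
    """Exact Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma**2))

def random_fourier_features(X, sigma, m, rng):
    """m-dimensional random Fourier feature map Z such that
    Z @ Z.T approximates the Gaussian kernel matrix."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)
```

With the feature map in hand, HSIC-style statistics can be computed on the n x m feature matrices instead of the full n x n kernel matrices, which is the usual motivation for choosing a small m such as 100.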