Debiased Machine Learning without Sample-Splitting for Stable Estimators
Authors: Qizhao Chen, Vasilis Syrgkanis, Morgane Austern
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5, Experimental Evaluation: We consider a synthetic experimental evaluation of our main theoretical findings. We focus on the partially linear model with a scalar outcome $Y \in \mathbb{R}$, a scalar continuous treatment $T \in \mathbb{R}$ and many controls $X \in \mathbb{R}^{n_x}$, where: $T = p_0(X) + \eta$, $\eta \sim N(0, 1)$; $Y = \theta_0 T + f_0(X) + \varepsilon$, $\varepsilon \sim N(0, 1)$. (A data-generation sketch appears below the table.) |
| Researcher Affiliation | Academia | Qizhao Chen, Harvard University, Cambridge, MA 02138, qizhaochen@g.harvard.edu; Vasilis Syrgkanis, Stanford University, Stanford, CA 94305, vsyrgk@stanford.edu; Morgane Austern, Harvard University, Cambridge, MA 02138, morgane.austern@gmail.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology. |
| Open Datasets | No | We consider a synthetic experimental evaluation of our main theoretical findings. We focus on the partially linear model with a scalar outcome $Y \in \mathbb{R}$, a scalar continuous treatment $T \in \mathbb{R}$ and many controls $X \in \mathbb{R}^{n_x}$, where: $T = p_0(X) + \eta$, $\eta \sim N(0, 1)$; $Y = \theta_0 T + f_0(X) + \varepsilon$, $\varepsilon \sim N(0, 1)$. |
| Dataset Splits | No | For the cross-fitted estimates we used 2 splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions machine learning models like 1-nearest neighbor and random forest regression, but does not provide specific software names with version numbers. |
| Experiment Setup | No | The paper mentions using sub-sampled 1-nearest neighbor and random forest regression with sub-sample sizes based on theoretical specifications (e.g., $m = n^{0.49}$), but it does not provide comprehensive experimental setup details like specific hyperparameters (learning rates, batch sizes), optimizer settings, or model initialization details. (An estimation sketch appears below the table.) |
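For concreteness, here is a minimal sketch of the partially linear data-generating process quoted in the Research Type and Open Datasets rows. The control dimension `n_x`, the particular nuisance functions `p0` and `f0`, and the value of `theta0` are illustrative assumptions; the quoted text only fixes the model structure and the standard-normal noise.

```python
import numpy as np

def generate_plm_data(n, n_x=20, theta0=1.0, seed=0):
    """Draw one synthetic sample from the partially linear model:
    T = p0(X) + eta, Y = theta0 * T + f0(X) + eps, eta, eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, n_x))
    p0 = X[:, 0]           # assumed form of the treatment nuisance p0
    f0 = np.sin(X[:, 1])   # assumed form of the outcome nuisance f0
    T = p0 + rng.normal(size=n)                 # eta ~ N(0, 1)
    Y = theta0 * T + f0 + rng.normal(size=n)    # eps ~ N(0, 1)
    return X, T, Y
```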
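The Dataset Splits and Experiment Setup rows mention 2-fold cross-fitting and sub-sampled estimators with sub-sample size $m = n^{0.49}$. The sketch below contrasts the full-sample (no sample-splitting) debiased estimate of $\theta_0$ with its 2-fold cross-fitted counterpart, reading "sub-sampled random forest" as scikit-learn's `RandomForestRegressor` with `max_samples=m`; that reading, like the synthetic data, is an assumption rather than the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Illustrative data from the partially linear model sketched above.
rng = np.random.default_rng(0)
n, theta0 = 2000, 1.0
X = rng.normal(size=(n, 20))
T = X[:, 0] + rng.normal(size=n)                       # T = p0(X) + eta
Y = theta0 * T + np.sin(X[:, 1]) + rng.normal(size=n)  # Y = theta0*T + f0(X) + eps

m = int(n ** 0.49)  # sub-sample size from the quoted rate m = n^0.49

def forest():
    # Sub-sampling via bootstrap draws of size m per tree (an assumed reading).
    return RandomForestRegressor(n_estimators=200, max_samples=m, random_state=0)

def partial_out(T_res, Y_res):
    # Residual-on-residual (partialling-out) estimate of theta0.
    return float(T_res @ Y_res / (T_res @ T_res))

# No sample-splitting: nuisances fit and evaluated on the same full sample.
theta_no_split = partial_out(T - forest().fit(X, T).predict(X),
                             Y - forest().fit(X, Y).predict(X))

# 2-fold cross-fitting: each nuisance predicted out-of-fold.
T_oof, Y_oof = np.zeros(n), np.zeros(n)
for tr, te in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    T_oof[te] = forest().fit(X[tr], T[tr]).predict(X[te])
    Y_oof[te] = forest().fit(X[tr], Y[tr]).predict(X[te])
theta_cross_fit = partial_out(T - T_oof, Y - Y_oof)

print(f"no-split: {theta_no_split:.3f}  cross-fitted: {theta_cross_fit:.3f}")
```

Both estimates target the same $\theta_0$; the paper's theoretical claim is that for sufficiently stable nuisance estimators, such as sub-sampled ones, the no-split version remains valid without the sample-splitting that cross-fitting introduces.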