Fast Instrument Learning with Faster Rates
Authors: Ziyu Wang, Yuhao Zhou, Jun Zhu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation studies demonstrate the competitive performance of our method. (Abstract) and We use a DNN as the black-box learner, and a RBF kernel for H, with bandwidth determined by marginal likelihood (72). We set N1 = N2 ∈ {500, 2500, 5000}. (Section 7) |
| Researcher Affiliation | Collaboration | Ziyu Wang, Yuhao Zhou, Jun Zhu Dept. of Comp. Sci. and Tech., BNRist Center, State Key Lab for Intell. Tech. & Sys., Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University {wzy196,yuhaoz.cs}@gmail.com, dcszj@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 Kernelized IV with learned instruments. |
| Open Source Code | Yes | Code available at https://github.com/meta-inf/fil. |
| Open Datasets | Yes | Our main simulation setup is adapted from [18, 19]; Appendix H.4 presents additional experiment on the demand dataset [6, 17]. and MNIST [62] or CIFAR-10 [63] image |
| Dataset Splits | No | The paper discusses validation statistics and sample sizes (n1, n2) but does not explicitly specify the percentages or counts for training, validation, or test splits. It implicitly uses standard splits for datasets like MNIST/CIFAR-10 but doesn't state them for the custom simulation setup. |
| Hardware Specification | Yes | All experiments are conducted on a single Nvidia RTX 3090 GPU, and we use a single core for experiments involving tree-based methods. (Appendix H.1) |
| Software Dependencies | No | The paper mentions using DNNs and kernels but does not specify the versions of software libraries (e.g., PyTorch, TensorFlow, scikit-learn) or programming languages used. |
| Experiment Setup | Yes | We use a DNN as the black-box learner, and a RBF kernel for H, with bandwidth determined by marginal likelihood (72). We set N1 = N2 ∈ {500, 2500, 5000}. (Section 7). DNN: Three fully connected layers with 128 neurons and ReLU activation. Regularization parameter 10⁻³. Training: 100 epochs with Adam optimizer and learning rate 0.001. (Appendix H.1.2) An illustrative configuration sketch follows the table. |
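
For concreteness, the DNN learner described in the Experiment Setup row could look roughly like the sketch below. This is a hedged illustration only: the layer width (128), ReLU activation, Adam optimizer, learning rate 0.001, and 100 training epochs come from the quoted Appendix H.1.2 text, while the use of PyTorch, the placeholder input/output dimensions `d_in`/`d_out`, and applying the 10⁻³ regularization parameter as an L2 weight-decay penalty are assumptions, not details confirmed here; the authors' repository contains the actual implementation.

```python
# Hedged sketch of the black-box DNN learner per Appendix H.1.2 (not the authors' code).
# Assumptions: PyTorch; placeholder dims d_in/d_out; the 1e-3 regularization parameter
# is applied as L2 weight decay; "three fully connected layers" read as three Linear
# layers with 128-unit hidden layers.
import torch
import torch.nn as nn

d_in, d_out = 1, 1  # placeholder dimensions; problem-specific in the paper

# Three fully connected layers with 128 neurons and ReLU activation.
model = nn.Sequential(
    nn.Linear(d_in, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, d_out),
)

# Adam optimizer with learning rate 0.001; regularization 1e-3 assumed as weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)

def train(loader, loss_fn, epochs=100):
    """Train for 100 epochs as stated in Appendix H.1.2; loss_fn is problem-specific."""
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```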