Single Point Transductive Prediction
Authors: Nilesh Tripuraneni, Lester Mackey
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conclude by showing the efficacy of our methods on both synthetic and real data, highlighting the improvements single point transductive prediction can provide in settings with distribution shift. We complement our theoretical analysis with a series of numerical experiments highlighting the failure modes of standard inductive prediction. In Sections 4.1 and 4.2, error bars represent 1 standard error of the mean computed over 20 independent problem instances. We provide complete experimental set-up details in Appendix G and code replicating all experiments at https://github.com/nileshtrip/SPTransducPredCode. |
| Researcher Affiliation | Collaboration | Nilesh Tripuraneni, 1 Department of EECS, University of California, Berkeley; 2 Microsoft Research, New England. Correspondence to: Nilesh Tripuraneni <nilesh_tripuraneni@berkeley.edu>. |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations (e.g., for ˆy JM and OM estimators) but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | We provide complete experimental set-up details in Appendix G and code replicating all experiments at https://github.com/nileshtrip/SPTransducPredCode. |
| Open Datasets | Yes | See Appendix G for further details on the methodology and datasets from the UCI dataset repository (Dua & Graff, 2017). |
| Dataset Splits | Yes | We first split our original dataset of n points into two disjoint, equal-sized folds (X(1), y(1)) = {(xi, yi) : i ∈ {1, ..., n/2}} and (X(2), y(2)) = {(xi, yi) : i ∈ {n/2 + 1, ..., n}}. The first fold (X(1), y(1)) is used to run two first-stage regressions. In practice, we use K-fold cross-fitting to increase the sample efficiency of the scheme as in (Chernozhukov et al., 2017). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or memory specifications used for running the experiments. It only mentions general experimental setup. |
| Software Dependencies | No | The paper mentions using software like 'CVXPY' (Diamond & Boyd, 2016) and 'Ray' (Moritz et al., 2018) but does not specify version numbers for these or any other software dependencies required for reproducibility. |
| Experiment Setup | Yes | We provide complete experimental set-up details in Appendix G and code replicating all experiments at https://github.com/nileshtrip/SPTransducPredCode. We construct problem instances for Lasso estimation by independently generating xi ∼ N(0, Ip), εi ∼ N(0, 1), and (β0)j ∼ N(0, 1) for j less than the desired sparsity level sβ0, while (β0)j = 0 otherwise. We fit the Lasso estimator, JM-style estimator with Lasso pilot, and the OM f-moment estimator with Lasso first-stage estimators. We set all hyperparameters to their theoretically-motivated values. |
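The two-fold cross-fitting scheme quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration of the pattern (fit a first-stage regression on one fold, predict on the other, then swap); the function names and the ordinary-least-squares first stage are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cross_fit_predictions(X, y, fit, predict):
    """Two-fold cross-fitting: fit the first-stage regression on one fold
    and predict on the held-out fold, then swap, so every prediction comes
    from a model that never saw that sample. `fit` and `predict` stand in
    for the paper's first-stage estimators."""
    n = len(y)
    half = n // 2  # assumes n is even, matching the equal-sized folds
    out = np.empty(n)
    for train, test in [(slice(0, half), slice(half, n)),
                        (slice(half, n), slice(0, half))]:
        model = fit(X[train], y[train])
        out[test] = predict(model, X[test])
    return out

# Usage with OLS as a hypothetical first-stage regression:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(100)
ols_fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
ols_pred = lambda w, A: A @ w
preds = cross_fit_predictions(X, y, ols_fit, ols_pred)
```

Generalizing the two slices above to K index partitions gives the K-fold cross-fitting variant the paper attributes to Chernozhukov et al. (2017).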
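The synthetic Lasso setup quoted in the Experiment Setup row can be sketched as below. The dimensions (n, p, sβ0), the ISTA solver, and the regularization scale sqrt(2 log p / n) are illustrative assumptions standing in for the paper's "theoretically-motivated values"; the authors' repository is the authoritative implementation.

```python
import numpy as np

def make_lasso_instance(n=200, p=50, s=5, seed=0):
    """One synthetic instance per the quoted setup: x_i ~ N(0, I_p),
    eps_i ~ N(0, 1), and beta_0 with s i.i.d. N(0, 1) entries and the
    rest zero. (n, p, s here are illustrative, not the paper's values.)"""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    beta0 = np.zeros(p)
    beta0[:s] = rng.standard_normal(s)
    y = X @ beta0 + rng.standard_normal(n)
    return X, y, beta0

def lasso_ista(X, y, lam, iters=500):
    """Plain ISTA (proximal gradient) solver for the Lasso objective
    (1/2n)||y - Xb||^2 + lam*||b||_1 -- a generic stand-in for whatever
    solver the authors' code uses."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

# A common theoretically motivated regularization scale, lam ~ sqrt(2 log p / n)
# (the paper's exact constants may differ):
X, y, beta0 = make_lasso_instance()
lam = np.sqrt(2 * np.log(X.shape[1]) / X.shape[0])
beta_hat = lasso_ista(X, y, lam)
```

The soft-thresholding step is what produces the sparse pilot estimate that the quoted JM-style and OM f-moment estimators then build on.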