Permutation Weighting
Authors: David Arbour, Drew Dimmery, Arjun Sondhi
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations indicate that permutation weighting provides favorable performance in comparison to existing methods. |
| Researcher Affiliation | Collaboration | Adobe Research, San Jose, CA, USA; Facebook Core Data Science, Menlo Park, CA, USA (work carried out while there); Forschungsverbund Data Science, University of Vienna, Vienna, AT; Flatiron Health, New York, NY, USA. |
| Pseudocode | No | The paper describes the method's steps in prose but does not include any structured pseudocode or algorithm blocks. An illustrative sketch of the procedure appears below the table. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes. |
| Open Datasets | Yes | Our first simulation study follows the design of Kang and Schafer (2007). ... To explore the behavior of permutation weighting under continuous treatment regimes with irregularly distributed data, we turn to the data of LaLonde (1986), and in particular, the Panel Study of Income Dynamics observational sample of 2915 units... |
| Dataset Splits | Yes | For this experiment, we took one instance of the Kang and Schafer simulation (with a correctly specified linear model) described in section 5.1 with a sample size of 2000 and performed 10-fold cross-validation on this data, measuring both in-sample and out-of-sample errors. A sketch of this protocol appears below the table. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions classifiers like logistic regression and gradient boosted decision trees but does not provide specific version numbers for any software dependencies or libraries used for implementation. |
| Experiment Setup | Yes | Hyperparameters tuned were the tree depth of each decision tree (color), the learning rate (columns), and the number of trees (not annotated; from 100 to 5000). A sketch of this sweep appears below the table. |
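
As noted in the Pseudocode row, the paper describes permutation weighting only in prose: permute the treatment vector to obtain draws from the product of marginals, train a classifier to distinguish observed from permuted (treatment, covariate) pairs, and use the classifier's odds as balancing weights. The sketch below is a minimal illustration of that idea, not the authors' implementation; the logistic-regression classifier, the number of permutations, and the `permutation_weights` helper name are assumptions.

```python
# Hedged sketch of permutation weighting: the classifier's odds for an observed
# (treatment, covariate) pair approximate the density ratio p(a)p(x) / p(a, x).
import numpy as np
from sklearn.linear_model import LogisticRegression

def permutation_weights(a, X, n_permutations=10, seed=0):
    """Estimate stabilized balancing weights via a permuted-vs-observed classifier."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float).reshape(-1, 1)
    X = np.asarray(X, dtype=float)
    n = len(a)
    weights = np.zeros(n)
    for _ in range(n_permutations):
        a_perm = rng.permutation(a)                      # draws from p(a)p(x)
        Z = np.vstack([np.hstack([a, X]),                # observed pairs, label 0
                       np.hstack([a_perm, X])])          # permuted pairs, label 1
        c = np.concatenate([np.zeros(n), np.ones(n)])
        clf = LogisticRegression(max_iter=1000).fit(Z, c)
        p = clf.predict_proba(np.hstack([a, X]))[:, 1]   # P(permuted | a, x)
        weights += p / (1.0 - p)                         # odds estimate the density ratio
    return weights / n_permutations
```

Averaging over several permutations only reduces the Monte Carlo noise introduced by any single permutation; a gradient boosted classifier could be substituted for the logistic regression, since the paper's experiments vary the choice of classifier.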
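The Dataset Splits row quotes a 10-fold cross-validation protocol measuring in-sample and out-of-sample errors on a single Kang and Schafer instance of size 2000. The snippet below sketches that protocol under stated assumptions: the covariates and labels are synthetic placeholders rather than the paper's simulated data, and log loss stands in for the error metric.

```python
# Hedged sketch of 10-fold cross-validation reporting both in-sample and
# out-of-sample error; the data and the log-loss metric are placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                       # placeholder covariates
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # placeholder binary labels

in_sample, out_of_sample = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    in_sample.append(log_loss(y[train_idx], clf.predict_proba(X[train_idx])[:, 1]))
    out_of_sample.append(log_loss(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

print(f"in-sample: {np.mean(in_sample):.3f}, out-of-sample: {np.mean(out_of_sample):.3f}")
```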
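The Experiment Setup row lists the boosted-tree hyperparameters that were tuned: tree depth, learning rate, and number of trees (100 to 5000). The grid below is a sketch of such a sweep; the specific grid values, the 5-fold cross-validated log-loss scoring, and scikit-learn's `GradientBoostingClassifier` are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a hyperparameter sweep over tree depth, learning rate, and
# number of trees; grid values and scoring rule are illustrative assumptions.
from itertools import product
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

param_grid = {
    "max_depth": [1, 2, 3, 5],
    "learning_rate": [0.01, 0.1, 0.3],
    "n_estimators": [100, 500, 1000, 5000],   # "number of trees, from 100 to 5000"
}

def sweep(X, c):
    """Return the best (hyperparameters, score) pair by cross-validated log loss."""
    results = []
    for depth, lr, n_trees in product(*param_grid.values()):
        model = GradientBoostingClassifier(
            max_depth=depth, learning_rate=lr, n_estimators=n_trees)
        score = cross_val_score(model, X, c, scoring="neg_log_loss", cv=5).mean()
        results.append(((depth, lr, n_trees), score))
    return max(results, key=lambda r: r[1])    # neg_log_loss: higher is better
```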