Multiply-Robust Causal Change Attribution
Authors: Victor Quintas-Martinez, Mohammad Taha Bahadori, Eduardo Santiago, Jeff Mu, David Heckerman
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method demonstrates excellent performance in Monte Carlo simulations, and we show its usefulness in an empirical application. |
| Researcher Affiliation | Collaboration | 1MIT Department of Economics. 2This work was completed while VQM was an intern at Amazon. 3Amazon. |
| Pseudocode | No | The paper describes the estimation process in Section 2.4 ('Estimation') through numbered steps (Step 1, Step 2, Step 3) using descriptive prose, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The code for the simulations and the empirical application is available at https://github.com/victor5as/mr_causal_attribution. |
| Open Datasets | Yes | We use data from the Current Population Survey (CPS) 2015. After applying the same sample restrictions as Chernozhukov et al. (2018c), the resulting sample contains 18,137 male and 14,382 female employed individuals. |
| Dataset Splits | Yes | We randomly split the data into a training set used to estimate the regression function and weights (40%), a calibration set to calibrate the predicted probabilities from our classification algorithm (10%), and an evaluation set used to obtain the main estimates of the θ̂_c parameters (50%). |
| Hardware Specification | No | The paper describes Monte Carlo simulations and a real-world application, but it does not provide any specific details about the hardware used (e.g., GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions using 'MLPRegressor and MLPClassifier from the Python library sklearn', 'GradientBoostingRegressor and GradientBoostingClassifier from sklearn', 'HistGradientBoostingRegressor and HistGradientBoostingClassifier from sklearn', and 'LASSO' for learning the regressions and weights. However, it does not specify version numbers for these Python libraries or for Python itself. |
| Experiment Setup | Yes | We use MLPRegressor and MLPClassifier from the Python library sklearn. We use the default settings for all hyperparameters, except that we allow for early stopping and use 3 hidden layers of 100 neurons each. Second, we consider gradient boosting (Table 1f), using GradientBoostingRegressor and GradientBoostingClassifier from sklearn with the default settings. In both cases, the penalty is selected by cross-validation. |
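The data split and learner configuration reported above can be sketched in sklearn. This is a minimal illustration, not the authors' code: the synthetic data, variable names, and random seeds are assumptions; only the 40%/10%/50% split and the reported hyperparameter choices (3 hidden layers of 100 neurons, early stopping, gradient boosting with defaults) come from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic placeholder data (the paper uses CPS 2015 microdata).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(size=1000)

# 40% training / 10% calibration / 50% evaluation, as reported.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.4, random_state=0)
# 10% of the total is 1/6 of the remaining 60%.
X_cal, X_eval, y_cal, y_eval = train_test_split(
    X_rest, y_rest, train_size=1/6, random_state=0)

# MLP with early stopping and 3 hidden layers of 100 neurons,
# all other hyperparameters left at sklearn defaults.
reg = MLPRegressor(hidden_layer_sizes=(100, 100, 100),
                   early_stopping=True, random_state=0)
reg.fit(X_train, y_train)

# Gradient boosting alternative with default settings.
gb = GradientBoostingRegressor(random_state=0)
gb.fit(X_train, y_train)

print(len(X_train), len(X_cal), len(X_eval))  # 400 100 500
```

Note that `train_test_split` takes fractions of its own input, so the second split uses `1/6` rather than `0.1` to carve 10% of the full sample out of the remaining 60%.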