Comparison of meta-learners for estimating multi-valued treatment heterogeneous effects
Authors: Naoufal Acharki, Ramiro Lugo, Antoine Bertoncello, Josselin Garnier
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically confirm the strengths and weaknesses of those methods with synthetic and semi-synthetic datasets. |
| Researcher Affiliation | Collaboration | (1) CMAP, Ecole polytechnique, Institut Polytechnique de Paris, Palaiseau, France; (2) Total Energies One Tech, Palaiseau, France. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and the semi-synthetic dataset in subsection 6.2 are available at https://github.com/nacharki/multipleT-MetaLearners. |
| Open Datasets | Yes | The code and the semi-synthetic dataset in subsection 6.2 are available at https://github.com/nacharki/multipleT-MetaLearners. |
| Dataset Splits | No | The paper mentions running experiments with "n = 2000 units" and "n = 10000 units" but does not specify explicit training, validation, or test dataset splits (e.g., percentages or exact sample counts for each split). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU/GPU models, memory, or specific computing cluster configurations. |
| Software Dependencies | No | The paper mentions software components like "XGBoost model" and "Random Forest" as base-learners, but it does not provide specific version numbers for these or other software dependencies, which is necessary for reproducibility. |
| Experiment Setup | Yes | All hyperparameters (e.g. the number of trees, depth etc.) are fixed to their default values during all experiments. |
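The table above refers to meta-learners built from base-learners (e.g. Random Forest, XGBoost) with default hyperparameters. As a point of reference, a minimal sketch of a T-learner extended to multi-valued treatments is shown below: one outcome model is fit per treatment arm, and the conditional effect of arm k is the difference between that arm's prediction and the control arm's prediction. This is an illustrative sketch with toy synthetic data, not the authors' released code; the function name `t_learner_cate` and the data-generating process are assumptions.

```python
# Hedged sketch of a T-learner for multi-valued treatments.
# One outcome regressor per treatment arm, default hyperparameters
# (mirroring the paper's "all hyperparameters fixed to defaults").
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def t_learner_cate(X, y, t, n_treatments):
    """Estimate CATE(x; k) = E[Y | X=x, T=k] - E[Y | X=x, T=0]
    for each non-control arm k = 1, ..., n_treatments - 1."""
    models = []
    for k in range(n_treatments):
        m = RandomForestRegressor(random_state=0)  # default settings
        m.fit(X[t == k], y[t == k])                # fit on arm k only
        models.append(m)
    mu0 = models[0].predict(X)                     # control-arm outcome model
    return np.column_stack(
        [models[k].predict(X) - mu0 for k in range(1, n_treatments)]
    )

# Toy synthetic data: treatment arm k shifts the outcome by k.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
t = rng.integers(0, 3, size=600)
y = X[:, 0] + t.astype(float) + 0.1 * rng.normal(size=600)

cate = t_learner_cate(X, y, t, n_treatments=3)
print(cate.shape)  # one CATE column per non-control arm
```

Here the estimated effect columns should be centered near the true shifts (1 and 2). Other meta-learners compared in the paper (S-, X-, R-learners and their multi-treatment variants) differ in how they combine the arm-specific models, not in this basic fit-per-arm structure.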