Collaborative Causal Inference with Fair Incentives
Authors: Rui Qiao, Xinyi Xu, Bryan Kian Hsiang Low
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the effectiveness of our reward scheme using simulated and real-world datasets. |
| Researcher Affiliation | Collaboration | Rui Qiao 1, Xinyi Xu 1 2, Bryan Kian Hsiang Low 1. 1Department of Computer Science, National University of Singapore, Republic of Singapore. 2Institute for Infocomm Research, A*STAR, Republic of Singapore. |
| Pseudocode | No | The paper describes procedures in text, but there are no formal "Algorithm" or "Pseudocode" blocks or figures. |
| Open Source Code | Yes | Our implementation can be found at https://github.com/qiaoruiyt/CollabCausalInference. |
| Open Datasets | Yes | TCGA (Weinstein et al., 2013) is a modified large-scale dataset collected from a public cancer genomics program named The Cancer Genome Atlas (TCGA), on the effectiveness of different treatments in curing cancer. [...] JOBS (Lalonde, 1984) consists of experimental samples originating from National Supported Work Demonstration (NSW), a US-based job training program to help disadvantaged individuals. [...] IHDP (Hill, 2011) is a simulated dataset based on a real randomized experiment named Infant Health and Development Program (IHDP), which aims to evaluate the treatment effect of high-quality child care provided by specialists on premature infants. |
| Dataset Splits | No | The paper mentions partitioning data to simulate parties and refers to existing splits within datasets (e.g., "We follow the split by (Louizos et al., 2017; Shalit et al., 2017)" for JOBS), but it does not specify explicit training/validation/test splits in the typical ML sense for reproducing its own model training. |
| Hardware Specification | Yes | All experiments are run on an Intel Xeon Gold 6226R CPU only. Typically, 8 cores are used for more efficient parallel computing. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) that would be needed to reproduce the computational environment. |
| Experiment Setup | No | The paper states "We perform all experiments using POR with linear models for simplicity," which specifies the model choice but not concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations. |