Multi-Cause Effect Estimation with Disentangled Confounder Representation
Authors: Jing Ma, Ruocheng Guo, Aidong Zhang, Jundong Li
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and real-world datasets show the superiority of our proposed framework from different aspects. |
| Researcher Affiliation | Academia | ¹University of Virginia, Charlottesville, VA, USA 22904; ²Arizona State University, Tempe, AZ, USA 85287. {jm3mr, aidong, jundong}@virginia.edu, rguo12@asu.edu |
| Pseudocode | No | The paper describes the framework with text and diagrams but does not include formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | Yes | We create two semi-synthetic datasets (Amazon-3C and Amazon-6C) based on the real-world Amazon review data (http://jmcauley.ucsd.edu/data/amazon/index_2014.html). |
| Dataset Splits | Yes | Each dataset is randomly split into 60%/20%/20% training/validation/test set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of neural networks but does not specify any software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Unless otherwise specified, hyperparameters are set as β = 20, λ = 0.4. By default, we set K as the same number of true treatment clusters, then we alter K to test the performance and disentanglement in Section 4.4. All the results are averaged over ten executions. |
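
The dataset-split row above states only that each dataset is randomly split 60%/20%/20% into training/validation/test sets. Since the paper releases no code, the following is a minimal sketch of one way to reproduce such a split; the function name `random_split`, the seed, and the example dataset size are assumptions, not part of the paper.

```python
# Hedged sketch of a 60%/20%/20% random train/validation/test split.
# Nothing here comes from the authors' (unreleased) code.
import numpy as np

def random_split(n_samples, seed=0, ratios=(0.6, 0.2, 0.2)):
    """Return index arrays for train/validation/test under one random permutation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(ratios[0] * n_samples)
    n_val = int(ratios[1] * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example usage with a hypothetical dataset of 10,000 units.
train_idx, val_idx, test_idx = random_split(10_000, seed=42)
```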
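
Similarly, the experiment-setup row reports default hyperparameters β = 20 and λ = 0.4, K set to the number of true treatment clusters, and results averaged over ten executions. The sketch below illustrates that protocol under stated assumptions: `train_and_evaluate` is a hypothetical stand-in for the paper's training routine, and K = 3 is an illustrative placeholder.

```python
# Hedged sketch of the averaging protocol: ten executions with fixed defaults.
# CONFIG values beta=20 and lambda_=0.4 come from the paper; K=3 is assumed.
import statistics

CONFIG = {"beta": 20, "lambda_": 0.4, "K": 3}

def run_experiments(train_and_evaluate, n_runs=10, **config):
    """Run the (hypothetical) training routine n_runs times and report mean/std of the metric."""
    scores = [train_and_evaluate(seed=seed, **config) for seed in range(n_runs)]
    return statistics.mean(scores), statistics.stdev(scores)
```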