Causal Representation Learning from Multiple Distributions: A General Setting
Authors: Kun Zhang, Shaoan Xie, Ignavier Ng, Yujia Zheng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states, 'Experimental results verify our theoretical claims' and 'Simulation studies verified our theoretical findings.' |
| Researcher Affiliation | Collaboration | Carnegie Mellon University; Mohamed bin Zayed University of Artificial Intelligence. Acknowledgements: The authors would also like to acknowledge the support from NSF Grant 2229881, the National Institutes of Health (NIH) under Contract R01HL159805, and grants from Apple Inc., KDDI Research Inc., Quris AI, and Florin Court Capital. |
| Pseudocode | No | The paper describes the model and implementation details in text, but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper states, 'we run experiments on the simulated data because the ground truth causal adjacency matrix and the latent variables across domains are available for simulated data.' However, it does not provide any specific access information (link, DOI, or citation to a public simulated dataset) for this data. |
| Dataset Splits | No | The paper mentions running experiments on simulated data but does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or cross-validation setup). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using frameworks like VAE and components like MLP, but it does not list any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA). |
| Experiment Setup | No | The paper describes general setup for simulations (e.g., 'noises are modulated with scaling random sampled from Unif[0.5, 2] and shifts are sampled from Unif[-2, 2]'), but it does not provide comprehensive details on specific hyperparameters (e.g., learning rate, batch size, number of epochs) or system-level training settings. |
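The noise-modulation detail quoted above (scalings from Unif[0.5, 2], shifts from Unif[-2, 2]) is the only concrete simulation parameter the paper reports. A minimal sketch of what such domain-specific noise modulation could look like is given below; the sample count, base noise distribution, and variable names are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical sample count; the paper does not specify one

# Per the quoted setup: per-domain scaling from Unif[0.5, 2],
# per-domain shift from Unif[-2, 2].
scale = rng.uniform(0.5, 2.0)
shift = rng.uniform(-2.0, 2.0)

# Assumed standard-normal base noise, modulated for this domain.
base_noise = rng.standard_normal(n)
modulated = scale * base_noise + shift
```

Even with this fragment, the missing hyperparameters (learning rate, batch size, epochs) noted in the table would still prevent a faithful reproduction of the training runs.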