Identifying Representations for Intervention Extrapolation
Authors: Sorawit Saengkyongam, Elan Rosenfeld, Pradeep Kumar Ravikumar, Niklas Pfister, Jonas Peters
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We validate our theoretical findings through a series of synthetic experiments and show that our approach can indeed succeed in predicting the effects of unseen interventions." Also, from Section 6 (Experiments): "We now conduct simulation experiments to empirically validate our theoretical findings." |
| Researcher Affiliation | Academia | ETH Zürich; Carnegie Mellon University; University of Copenhagen |
| Pseudocode | Yes | Algorithm 1: An algorithm for Rep4Ex |
| Open Source Code | Yes | The code for all experiments is included in the supplementary material. |
| Open Datasets | No | The paper uses data generated from defined Structural Causal Models (SCMs) for its experiments (e.g., S(α) and S(γ)), which are synthetic. No concrete access information or citation to a public dataset is provided. |
| Dataset Splits | No | The paper mentions a 'training support' and generating '100 test points' for some experiments, but it does not report specific train/validation/test split percentages or sample counts for the main experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'neural networks', 'Adam optimizer', and 'Leaky ReLU activation functions', but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Learning rate: 0.005; batch size: 256; optimizer: Adam with β1 = 0.9, β2 = 0.999; number of epochs: 1000. Architecture: three hidden layers with hidden size 32. |
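The reported architecture (three hidden layers of width 32 with Leaky ReLU activations) can be sketched as a plain numpy forward pass. This is an illustrative reconstruction, not the authors' code: the input/output dimensions, weight initialization, and Leaky ReLU slope are assumptions, since the paper's table above reports only the hidden size, layer count, and activation. Training with Adam (lr = 0.005, β1 = 0.9, β2 = 0.999, 1000 epochs) is noted in a comment but omitted from the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.01):
    # Slope 0.01 is an assumed default; the paper does not specify it.
    return np.where(x > 0, x, slope * x)

def init_mlp(d_in, d_out, hidden=32, n_hidden=3):
    # Three hidden layers of width 32, as reported in the setup table.
    sizes = [d_in] + [hidden] * n_hidden + [d_out]
    return [(rng.normal(0, np.sqrt(2.0 / m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:  # no activation on the output layer
            x = leaky_relu(x)
    return x

# d_in = 5 and d_out = 1 are hypothetical; the paper's SCM dimensions vary.
params = init_mlp(d_in=5, d_out=1)
batch = rng.normal(size=(256, 5))  # batch size 256, as reported
out = forward(params, batch)
# In the paper's setup this network would be trained with Adam
# (lr=0.005, betas=(0.9, 0.999)) for 1000 epochs; omitted here.
print(out.shape)
```

Running the sketch confirms the shapes line up: a batch of 256 inputs yields a (256, 1) output after passing through the three 32-unit hidden layers.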