Quantifying Consistency and Information Loss for Causal Abstraction Learning
Authors: Fabio Massimo Zennaro, Paolo Turrini, Theodoros Damoulas
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we illustrate the flexibility of our setup by empirically showing how different measures and algorithmic choices may lead to different abstractions. We run empirical simulations for the two scenarios in Fig. 2 |
| Researcher Affiliation | Academia | Fabio Massimo Zennaro, Paolo Turrini and Theodoros Damoulas, University of Warwick, Coventry, United Kingdom. {fabio.zennaro, p.turrini, t.damoulas}@warwick.ac.uk |
| Pseudocode | Yes | Algorithm 1 Overall IC error evaluation; Algorithm 2 Abstraction evaluation |
| Open Source Code | Yes | All simulations are available online1. 1https://github.com/FMZennaro/CausalAbstraction/tree/main/papers/2023-quantifying-consistency-and-infoloss |
| Open Datasets | Yes | using a lung cancer model from [Guyon et al., 2008] |
| Dataset Splits | No | The paper mentions 'Empirical distributions are computed from 10^4 samples; means and standard deviations are computed out of 10 repetitions' but does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper describes running empirical simulations but does not provide any specific hardware specifications (e.g., GPU/CPU models, memory) used for these experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python or library versions) used in the experiments. |
| Experiment Setup | Yes | Empirical distributions are computed from 10^4 samples; means and standard deviations are computed out of 10 repetitions. Two different solutions are learned by minimizing either IC or ILL. Three different solutions are learned by minimizing ISIL with the three assessment sets. |
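The evaluation protocol described in the Experiment Setup row (empirical distributions computed from 10^4 samples, with means and standard deviations reported over 10 repetitions) can be sketched as follows. This is a minimal illustration only: the `toy_error` function and the Bernoulli data-generating model are hypothetical placeholders, not the paper's IC, ILL, or ISIL measures.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 10**4  # samples per empirical distribution (as reported in the paper)
N_REPEATS = 10     # repetitions used for mean/std reporting (as reported in the paper)

def toy_error(samples: np.ndarray) -> float:
    # Hypothetical stand-in for an abstraction-error measure:
    # distance between the empirical mean and a known target.
    return abs(samples.mean() - 0.5)

errors = []
for _ in range(N_REPEATS):
    # Hypothetical data-generating model: a fair Bernoulli variable.
    samples = rng.binomial(1, 0.5, size=N_SAMPLES)
    errors.append(toy_error(samples))

print(f"error: {np.mean(errors):.4f} +/- {np.std(errors):.4f}")
```

Any concrete measure (IC, ILL, or ISIL with a chosen assessment set) would replace `toy_error`, and the reported figures are the mean and standard deviation across the 10 repetitions.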