Intervention Generalization: A View from Factor Graph Models

Authors: Gecia Bravo-Hermsdorff, David Watson, Jialin Yu, Jakob Zeitler, Ricardo Silva

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run a number of semi-synthetic experiments to evaluate the performance of the IFM approach on a range of intervention generalization tasks. Results. We evaluate model performance based on the proportional root mean squared error (pRMSE), defined as the average of the squared difference between the ground truth Y and estimated Ŷ, with each entry further divided by the ground truth variance of the corresponding Y. Results are visualized in Fig. 5. (A sketch of this metric appears after the table.)
Researcher Affiliation | Academia | Department of Statistical Science, University College London. Department of Informatics, King's College London. Department of Computer Science, University College London.
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The code for reproducing all results and figures is available online (footnote 8); in Appendix E, we provide a detailed description of the datasets and models; and in Appendix F we present further analysis and results. (Footnote 8: https://github.com/rbas-ucl/intgen)
Open Datasets | Yes | Datasets. Our experiments are based on the following two biomolecular datasets: i) Sachs [66]: a cellular signaling network with 11 nodes... ii) DREAM [30]: Simulated data based on a known E. coli regulatory sub-network...
Dataset Splits | No | The paper refers to 'training regimes' and 'test regimes' but does not provide specific percentages or counts for training, validation, and test splits, nor does it explicitly mention a validation set.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions software like XGBoost and MLPs, but does not provide specific version numbers for any software dependencies, libraries, or frameworks used.
Experiment Setup | Yes | i) Causal-DAG: a DAG model following the DAG structure and data provided by the original Sachs et al. and DREAM sources (DAGs shown in Fig. 10(a) and Fig. 11(a), respectively). Given the DAG, we fit a model where each conditional distribution is a heteroskedastic Gaussian with mean and variance parameterized by MLPs (with 10 hidden units) of the respective parents. ii) Causal-IFMs: the corresponding IFM is obtained by a direct projection of the postulated DAG factors (as done in, e.g., Fig. 2(b)). The likelihood is a neural energy model (Section 4) with MLPs with 15 hidden units defining potential functions. (A sketch of the heteroskedastic conditional appears below.)
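The pRMSE metric quoted under Research Type is straightforward to compute. Below is a minimal Python/NumPy sketch under stated assumptions: the variance normalization is taken per outcome column and the square root is applied after averaging; the function name `prmse` and these conventions are ours, and the paper's released code may aggregate differently.

```python
import numpy as np

def prmse(y_true, y_hat):
    """Proportional RMSE as described in the quoted passage:
    squared errors, each divided by the ground-truth variance of
    the corresponding outcome, then averaged and square-rooted.

    y_true, y_hat: arrays of shape (n_samples, n_outcomes).
    """
    sq_err = (y_true - y_hat) ** 2         # entrywise squared differences
    var = y_true.var(axis=0)               # ground-truth variance per outcome
    return np.sqrt((sq_err / var).mean())  # root of the normalized average
```

Under this convention a predictor that always outputs the column means scores approximately 1, which illustrates the point of the normalization: values well below 1 indicate a model that beats the trivial constant predictor.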
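The Causal-DAG baseline in the last row fits each conditional as a heteroskedastic Gaussian whose mean and variance are MLPs of the node's parents. The following PyTorch sketch shows one such conditional with the quoted 10 hidden units; the class name, the tanh activation, and the exp link for the variance are our assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class HeteroskedasticGaussian(nn.Module):
    """One conditional p(x | parents) = N(mu(parents), sigma^2(parents)).

    Mean and log-variance are separate single-hidden-layer MLPs
    (10 hidden units, as quoted; activation and variance link are
    assumptions).
    """
    def __init__(self, n_parents, hidden=10):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(n_parents, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.logvar_net = nn.Sequential(
            nn.Linear(n_parents, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def log_prob(self, parents, x):
        mu = self.mean_net(parents)
        sigma = torch.exp(0.5 * self.logvar_net(parents))  # positive scale
        return torch.distributions.Normal(mu, sigma).log_prob(x)
```

A full DAG model would be a collection of these modules, one per node, trained by maximizing the sum of their log-likelihoods over the observed data.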