Tractable Uncertainty for Structure Learning

Authors: Benjie Wang, Matthew R. Wicker, Marta Kwiatkowska

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we perform an empirical validation of the TRUST framework. We show the results in Figure 2 for all methods. TRUST-D and TRUST-G match or outperform their counterparts across all metrics, with especially strong performance on E-SHD, where TRUST-G is best by a clear margin for both d = 16, 32."
Researcher Affiliation | Academia | Benjie Wang, Matthew Wicker, Marta Kwiatkowska; Department of Computer Science, University of Oxford, Oxford, United Kingdom.
Pseudocode | No | The paper describes procedural steps in text (e.g., in "C. Causal Effect Computation" or "D. OrderSPN Structure Learning Oracles"), but it does not present any formal pseudocode or algorithm blocks.
Open Source Code | Yes | "Our implementation is available at https://github.com/wangben88/trust."
Open Datasets | No | "All methods tested employ the BGe marginal likelihood. For each experiment, a dataset D_train of N = 100 datapoints is generated for each graph for inference." The paper describes generating synthetic data but does not provide concrete access information (link, DOI, or specific citation) for a publicly available dataset.
Dataset Splits | No | The paper mentions "a dataset D_train of N = 100 datapoints is generated" and "D_test to denote a held-out dataset of 1000 datapoints." While it distinguishes between training and test data, it does not describe a separate validation set or its size for reproduction. (A hedged data-generation sketch appears after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | "Our implementation of TRUST uses the PyTorch framework." No specific version numbers for PyTorch or other software dependencies are provided. (An environment-logging sketch appears after this table.)
Experiment Setup | Yes | "Parameter learning in the SPN was performed by optimizing the ELBO objective using the Adam optimizer with learning rate 0.1 and for 700 iterations." and "In the d = 16, 32 cases, we used expansion factors of K = [64, 16, 6, 2], [32, 8, 2, 6, 2] respectively" and "We ran DIBS with N = 30 particles and 3000 epochs... while GADGET was run using 16 coupled chains and for 320000 MCMC iterations." (A training-loop sketch with these hyperparameters appears after this table.)
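
The data setup quoted in the Open Datasets and Dataset Splits rows (100 training points and 1,000 held-out points per graph, scored with the BGe marginal likelihood) can be approximated as follows. This is a minimal sketch, assuming a linear-Gaussian SCM over a random upper-triangular DAG; the edge probability, weight range, and noise scale are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def sample_random_dag(d, edge_prob=0.2, seed=None):
    """Sample a random DAG over d nodes as a strictly upper-triangular
    adjacency matrix (acyclic by construction). The edge probability is
    an assumption for illustration, not a value from the paper."""
    rng = np.random.default_rng(seed)
    adj = (rng.random((d, d)) < edge_prob).astype(float)
    return np.triu(adj, k=1)

def sample_linear_gaussian(adj, n, seed=None):
    """Draw n samples from a linear-Gaussian SCM over the given DAG."""
    rng = np.random.default_rng(seed)
    d = adj.shape[0]
    weights = adj * rng.uniform(0.5, 2.0, size=(d, d))  # assumed weight range
    data = np.zeros((n, d))
    for j in range(d):  # column order is a topological order for an upper-triangular DAG
        data[:, j] = data @ weights[:, j] + rng.normal(0.0, 1.0, size=n)
    return data

d = 16                                                # paper uses d = 16 and d = 32
adj = sample_random_dag(d, seed=0)
D_train = sample_linear_gaussian(adj, n=100, seed=1)  # "N = 100 datapoints ... for inference"
D_test = sample_linear_gaussian(adj, n=1000, seed=2)  # "held-out dataset of 1000 datapoints"
```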
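
Because no dependency versions are reported, anyone re-running the released code will need to record their own environment. Below is a small sketch of one way to log this alongside results; the specific fields captured are a choice made here, not something the paper specifies.

```python
import json
import platform
import torch

# The paper names PyTorch but gives no versions; record the environment of a
# re-run next to its results so the run can be reproduced later.
environment = {
    "python": platform.python_version(),
    "pytorch": torch.__version__,
    "cuda": torch.version.cuda,  # None for CPU-only builds
}
print(json.dumps(environment, indent=2))
```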
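
The Experiment Setup row pins down the headline optimization settings (ELBO objective, Adam, learning rate 0.1, 700 iterations). The skeleton below shows how those settings might be wired together in PyTorch; the ToyVariationalModel is a stand-in used only so the loop runs end to end, since the paper's OrderSPN model is not reconstructed here.

```python
import torch

class ToyVariationalModel(torch.nn.Module):
    """Stand-in for the OrderSPN: a diagonal Gaussian over the data, used only
    to make the training skeleton below executable. Not the paper's model."""
    def __init__(self, d):
        super().__init__()
        self.mean = torch.nn.Parameter(torch.zeros(d))
        self.log_std = torch.nn.Parameter(torch.zeros(d))

    def negative_elbo(self, data):
        # For this toy stand-in, the "ELBO" reduces to a Gaussian log-likelihood.
        dist = torch.distributions.Normal(self.mean, self.log_std.exp())
        return -dist.log_prob(data).sum(dim=1).mean()

def fit(model, data, lr=0.1, iterations=700):
    """Optimization loop matching the quoted hyperparameters:
    Adam, learning rate 0.1, 700 iterations, ELBO objective."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iterations):
        optimizer.zero_grad()
        loss = model.negative_elbo(data)  # maximize ELBO = minimize its negation
        loss.backward()
        optimizer.step()
    return model

data = torch.randn(100, 16)               # stands in for D_train (N = 100, d = 16)
fit(ToyVariationalModel(d=16), data)
```

The DIBS (30 particles, 3000 epochs) and GADGET (16 coupled chains, 320,000 MCMC iterations) settings quoted above configure the baseline samplers rather than TRUST itself, so they are not reflected in this sketch.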