Order Statistics for Probabilistic Graphical Models

Authors: David Smith, Sara Rouhani, Vibhav Gogate

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally compared our new algorithm with a baseline sampling algorithm over randomly generated graphical models as well as Chow-Liu trees [Chow and Liu, 1968] computed over three benchmark datasets. We found that our new algorithm significantly outperforms the sampling algorithm, especially when r is not extreme (either too small or too large)." and "7 Experimental Results: In this section, we aim to evaluate the performance of Algorithms 1 and 3."
Researcher Affiliation | Academia | "David Smith, Sara Rouhani, and Vibhav Gogate, The University of Texas at Dallas; dbs014200@utdallas.edu, sxr15053@utdallas.edu, vgogate@hlt.utdallas.edu"
Pseudocode | Yes | "Algorithm 1 Find Median Independent Markov Network", "Algorithm 2 Estimate Rank", "Algorithm 3 Rank Variable Elimination", "Algorithm 4 Rank VE Step", "Algorithm 5 Rank VE Step", "Algorithm 6 Rank Variable Elimination Combine Bin Step"
Open Source Code | No | No explicit statement providing concrete access to the source code (e.g., a repository link, a release statement, or a mention of supplementary materials) was found.
Open Datasets | Yes | "We evaluate the infer rank query on 3 benchmark datasets commonly used for evaluating learning algorithms for tractable probabilistic models: NLTCS, KDD Cup, and Plants [Lowd and Davis, 2010; Rahman and Gogate, 2016; Gens and Domingos, 2013]."
Dataset Splits | No | No explicit details on training, validation, or test splits (e.g., percentages, sample counts, or predefined splits) were found.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used to run the experiments were provided.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver versions) were provided.
Experiment Setup | Yes | "We use quantization function q(x, α) = α log x, and run the experiment for varying settings of α." and "For each e ∈ [0, 80], we generate 100 Markov networks on 20 variables with e pairwise potentials having randomly generated scopes. The weights of each potential are randomly generated from N(0, 1)."
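The quoted setup can be sketched in code. The following is a minimal illustration, not the authors' implementation: `quantize` and `random_markov_network` are hypothetical names, and the representation of a potential as a (scope, weight) pair is an assumption made only to make the generation step concrete.

```python
import math
import random

def quantize(x, alpha):
    """Hypothetical sketch of the quantization function q(x, alpha) = alpha * log x."""
    return alpha * math.log(x)

def random_markov_network(n_vars=20, n_potentials=10, seed=None):
    """Sketch of the random-model generation step described in the setup.

    Each pairwise potential gets a randomly chosen scope (two distinct
    variables out of n_vars) and a weight drawn from N(0, 1). The
    (scope, weight) representation is an illustrative assumption.
    """
    rng = random.Random(seed)
    potentials = []
    for _ in range(n_potentials):
        scope = tuple(rng.sample(range(n_vars), 2))  # random pairwise scope
        weight = rng.gauss(0.0, 1.0)                 # weight ~ N(0, 1)
        potentials.append((scope, weight))
    return potentials

# Per the quoted setup: for each e in [0, 80], generate 100 Markov
# networks on 20 variables with e pairwise potentials.
benchmarks = {e: [random_markov_network(20, e, seed=100 * e + i)
                  for i in range(100)]
              for e in range(0, 81)}
```

The seeding scheme above is only for reproducibility of the sketch itself; the paper does not describe how its random instances were seeded.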