The Linearization of Belief Propagation on Pairwise Markov Random Fields
Authors: Wolfgang Gatterbauer
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments will answer the following 3 questions: (1) What is the effect of the convergence parameter s on accuracy and the number of required iterations until convergence? (2) How accurate is our approximation under varying conditions: (i) the density of the network, (ii) the strength of the interaction, and (iii) the fraction of labeled nodes? (3) How fast is the linearized approximation as compared to standard Loopy BP? Experimental protocol. We define accuracy as the fraction of unlabeled nodes that receive correct labels (see the evaluation sketch after this table). In order to evaluate the accuracy of a method, we need to use graphs with known label ground truth (GT). |
| Researcher Affiliation | Academia | Wolfgang Gatterbauer, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 |
| Pseudocode | No | The paper describes equations and iterative updates but does not provide a formal pseudocode block or algorithm section. |
| Open Source Code | Yes | An efficient Python implementation is available on GitHub (SSLH 2015). |
| Open Datasets | No | The paper states it uses "synthetic graphs with known GT" generated by their own "synthetic graph generator". While the generator code is available, the paper does not provide a direct link or citation to a pre-existing, publicly available dataset that was used. It describes how data is created, not accessed. |
| Dataset Splits | No | The paper splits nodes into 'labeled' and 'unlabeled' fractions for evaluation, but it does not specify a separate 'validation' split for model tuning or hyperparameter selection in the conventional supervised-learning sense. |
| Hardware Specification | Yes | The experiments are run on a 2.5 GHz Intel Core i5 with 16 GB of main memory and a 1 TB SSD. |
| Software Dependencies | No | The paper mentions software like 'Python', the 'SciPy library (Jones et al. 2001)', the 'PyAMG library (Bell, Olson, and Schroder 2011)', and 'Scikit-learn (Pedregosa et al. 2011)'. However, specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | Throughout our experiments, we use k = 3 classes and the potential ψ = [[1, h, 1], [1, 1, h], [h, 1, 1]], parameterized by a value h representing the ratio between min and max entries. Dividing by (2 + h) centers it around 1. ... We create graphs with n nodes and assign the same fraction of nodes to each of the 3 classes: α = [1/3, 1/3, 1/3]. We also vary the parameters m and d = m/n as the average in- and outdegree in the graph, and we assume a power-law distribution with coefficient 0.3. We then keep a fraction f of node labels and measure accuracy on the remainder. ... For the remaining accuracy experiments, we use s = 0.5 and run our algorithm to convergence (see the sketch after this table). |
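To make the quoted setup concrete, below is a minimal sketch of the linearized update B ← Ê + A·B·Ĥ that the paper derives, built around the 3-class potential described above. The function names, the centering step, and the scaling of Ĥ via the convergence parameter s and the product of spectral radii are our reading of the paper, not the SSLH library's actual API.

```python
import numpy as np

# Hypothetical sketch of linearized BP on a pairwise MRF, assuming the
# fixed-point iteration B <- E_hat + A @ B @ H_hat from the paper.
# Names and the exact convergence scaling are illustrative.

def centered_potential(h, k=3):
    """Paper's 3-class potential, normalized by (2 + h), then centered.

    After dividing by (2 + h), the entries average 1/k, so subtracting
    the mean yields the residual ("centered") potential H_hat.
    """
    psi = np.array([[1.0, h, 1.0],
                    [1.0, 1.0, h],
                    [h, 1.0, 1.0]]) / (2.0 + h)
    return psi - psi.mean()

def linearized_bp(A, E, H_hat, s=0.5, max_iter=1000, tol=1e-9):
    """Iterate B <- E_hat + A @ B @ H_s until the beliefs stop changing.

    A: (n, n) adjacency matrix. E: (n, k) explicit beliefs, one-hot rows
    for labeled nodes and uniform rows (1/k) for unlabeled ones.
    s in (0, 1) shrinks H_hat below the convergence boundary, which we
    approximate here by rho(A) * rho(H_hat) (an assumption).
    """
    k = E.shape[1]
    E_hat = E - 1.0 / k  # center explicit beliefs around the uniform prior
    rho = (np.max(np.abs(np.linalg.eigvals(A)))
           * np.max(np.abs(np.linalg.eigvals(H_hat))))
    H_s = (s / rho) * H_hat
    B = E_hat.copy()
    for _ in range(max_iter):
        B_next = E_hat + A @ B @ H_s
        if np.max(np.abs(B_next - B)) < tol:
            return B_next
        B = B_next
    return B
```

The final beliefs are read off per node with `B.argmax(axis=1)`; centering means only the relative ordering of the k belief values matters.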
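And a small sketch of the evaluation protocol quoted in the Research Type row: keep a fraction f of ground-truth labels as explicit beliefs, infer the rest, and report accuracy as the fraction of unlabeled nodes that receive correct labels. This reuses the hypothetical helpers from the sketch above; the function name and signature are ours.

```python
import numpy as np

def evaluate(A, y_true, f=0.1, h=3.0, s=0.5, seed=0):
    """Keep a fraction f of labels, run linearized BP, and score
    accuracy on the remaining (unlabeled) nodes. Hypothetical helper;
    y_true is an integer array of ground-truth classes in {0, 1, 2}.
    """
    rng = np.random.default_rng(seed)
    n, k = len(y_true), 3
    labeled = rng.random(n) < f               # nodes that keep their label
    E = np.full((n, k), 1.0 / k)              # uniform prior for unlabeled nodes
    E[labeled] = np.eye(k)[y_true[labeled]]   # one-hot explicit beliefs
    B = linearized_bp(A, E, centered_potential(h, k), s=s)
    y_pred = B.argmax(axis=1)                 # most likely class per node
    mask = ~labeled                           # score only unlabeled nodes
    return float((y_pred[mask] == y_true[mask]).mean())
```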