Scalable Belief Propagation via Relaxed Scheduling
Authors: Vitalii Aksenov, Dan Alistarh, Janne H. Korhonen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in terms of scalability and in terms of wall-clock convergence time, on a range of practical applications. |
| Researcher Affiliation | Academia | Vitaly Aksenov (ITMO University, aksenov.vitaly@gmail.com); Dan Alistarh (IST Austria, dan.alistarh@ist.ac.at); Janne H. Korhonen (IST Austria, janne.korhonen@ist.ac.at) |
| Pseudocode | No | The paper describes algorithmic steps in numbered lists and paragraphs but does not include formally labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/IST-DASLab/relaxed-bp. |
| Open Datasets | Yes | We run our experiments on four MRFs of moderate size: a binary tree of size 10^7, an Ising model [14, 20] of size 10^3 x 10^3, a Potts model [37] of size 10^3 x 10^3, and the decoding of a (3, 6)-LDPC code [29] of size 3 x 10^5. |
| Dataset Splits | No | The paper runs experiments on specific Markov Random Field (MRF) models (e.g., Ising model, Potts model, LDPC code) but gives no training/validation/test splits. Belief propagation is an inference algorithm, so traditional splits for training machine learning models do not apply and are not described. |
| Hardware Specification | Yes | We executed on a 4-socket Intel Xeon Gold 6150 2.7 GHz server, with 18 cores per socket and 512GB of RAM. |
| Software Dependencies | Yes | The code is written in Java; we use Java 11.0.5 and OpenJDK VM 11.0.5. |
| Experiment Setup | Yes | For all these algorithms, the scheduler is a Multiqueue with 4 priority queues per thread, as discussed in Section 3, which we found to work best (although other values behave similarly). We run the baseline algorithm on one process since it is sequential, while all other algorithms are concurrent and are therefore executed using 70 threads. For residual splash, we implemented two variants... with the best value of H, which we found to be 10. (A sketch of such a Multiqueue scheduler appears below the table.) |
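
To make the scheduler in the setup above concrete, below is a minimal Java sketch of a Multiqueue-style relaxed priority scheduler: each thread owns a handful of lock-protected priority queues, insertions go to a uniformly random queue, and a deletion pops from the better of two randomly chosen queues. This is an illustrative sketch of the general technique, not the authors' implementation from the linked repository; the class name `MultiQueue`, the `deleteMax` method, and the residual-as-priority convention are all assumptions made here.

```java
import java.util.PriorityQueue;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of a Multiqueue relaxed scheduler: threads share an
// array of sequential priority queues (e.g., 4 per thread, as in the paper's
// setup); insert() pushes into a random queue, deleteMax() pops from the
// better of two randomly chosen queues.
public class MultiQueue<T> {
    private static class Entry<T> implements Comparable<Entry<T>> {
        final double priority; // e.g., a BP message residual (higher = more urgent)
        final T task;
        Entry(double priority, T task) { this.priority = priority; this.task = task; }
        @Override
        public int compareTo(Entry<T> other) {
            // Reversed so the JDK min-heap behaves as a max-heap on priority.
            return Double.compare(other.priority, this.priority);
        }
    }

    private final PriorityQueue<Entry<T>>[] queues;
    private final Object[] locks;

    @SuppressWarnings("unchecked")
    public MultiQueue(int numThreads, int queuesPerThread) {
        int n = numThreads * queuesPerThread; // e.g., 70 threads * 4 queues
        queues = new PriorityQueue[n];
        locks = new Object[n];
        for (int i = 0; i < n; i++) {
            queues[i] = new PriorityQueue<>();
            locks[i] = new Object();
        }
    }

    public void insert(double priority, T task) {
        int i = ThreadLocalRandom.current().nextInt(queues.length);
        synchronized (locks[i]) {
            queues[i].add(new Entry<>(priority, task));
        }
    }

    // Relaxed deleteMax: inspect two random queues and pop from the one whose
    // top element has the higher priority. This may miss the global maximum,
    // which is exactly the relaxation that buys scalability.
    public T deleteMax() {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int a = rnd.nextInt(queues.length);
        int b = rnd.nextInt(queues.length);
        // Lock in a fixed order to avoid deadlock between concurrent callers.
        int first = Math.min(a, b), second = Math.max(a, b);
        synchronized (locks[first]) {
            if (first == second) return pop(first);
            synchronized (locks[second]) {
                Entry<T> ta = queues[first].peek();
                Entry<T> tb = queues[second].peek();
                if (ta == null) return pop(second);
                if (tb == null) return pop(first);
                return ta.priority >= tb.priority ? pop(first) : pop(second);
            }
        }
    }

    private T pop(int i) {
        Entry<T> e = queues[i].poll();
        return e == null ? null : e.task;
    }
}
```

Because threads contend on small random queues rather than a single global heap, insert/delete throughput scales with the thread count; the price is that `deleteMax` may return a task whose residual is only approximately maximal, which is the relaxation the paper argues belief propagation tolerates without harming convergence.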