Uprooting and Rerooting Higher-Order Graphical Models
Authors: Mark Rowland, Adrian Weller
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate empirically that rerooting can significantly improve accuracy of methods of inference for higher-order models at negligible computational cost. |
| Researcher Affiliation | Academia | Mark Rowland University of Cambridge mr504@cam.ac.uk Adrian Weller University of Cambridge and Alan Turing Institute aw665@cam.ac.uk |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions using a third-party library ('All methods were implemented using libDAI [8]') but does not provide a link or an explicit statement about releasing its own source code for the described methodology. |
| Open Datasets | No | The paper describes generating synthetic models for its experiments ('complete hypergraphs (with 8 variables) and toroidal grid models (5 x 5 variables). Potentials up to order 4 were selected randomly'), but it does not refer to or provide access to any publicly available dataset; all inputs are generated on the fly (see the generation sketch below the table). |
| Dataset Splits | No | The paper does not specify training, validation, or test dataset splits, as the experiments involve running inference on randomly generated model instances rather than splitting a fixed dataset. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running the experiments (e.g., specific CPU/GPU models, cloud instances). |
| Software Dependencies | No | The paper states 'All methods were implemented using libDAI [8]', but it does not give a version number for libDAI or name any other software dependencies, which reproducibility requires. |
| Experiment Setup | No | The paper describes the types of models and inference methods used (e.g., 'double loop method... which relates to generalized belief propagation [24]) and MAP inference (using loopy belief propagation, LBP [9])'), but it does not report the specific numerical settings for these methods (e.g., convergence tolerances, damping, iteration limits) or for the rerooting heuristics (see the libDAI sketch below the table). |
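
The synthetic setup flagged in the Open Datasets row can be made concrete. The paper releases no generation code, so everything below is an assumed illustration: the uniform potential range, the seed, and the bitmask subset enumeration are not taken from the paper, which only states that potentials up to order 4 were selected randomly on a complete hypergraph of 8 variables.

```cpp
// Hypothetical sketch of the synthetic model generation described in the
// paper. The coupling distribution and seed are assumptions, not reported
// values.
#include <bitset>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

int main() {
    const unsigned n = 8;          // complete hypergraph on 8 binary variables
    const unsigned max_order = 4;  // potentials up to order 4
    std::mt19937 rng(0);                                      // assumed seed
    std::uniform_real_distribution<double> theta(-1.0, 1.0);  // assumed range

    // One random coupling strength per variable subset of size 2..4,
    // with subsets enumerated as bitmasks over the n variables.
    std::vector<std::pair<unsigned, double>> potentials;
    for (unsigned mask = 1; mask < (1u << n); ++mask) {
        const unsigned order = std::bitset<8>(mask).count();
        if (order >= 2 && order <= max_order)
            potentials.emplace_back(mask, theta(rng));
    }
    std::printf("generated %zu higher-order potentials\n", potentials.size());
    return 0;
}
```

For 8 variables this yields C(8,2) + C(8,3) + C(8,4) = 154 subset potentials; the 5 x 5 toroidal grids in the paper would instead restrict the subsets to local neighbourhoods.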
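
Similarly, the Software Dependencies and Experiment Setup rows note that inference ran on libDAI with unreported settings. The sketch below only illustrates libDAI's standard interface for loopy belief propagation; the model file name, tolerance, iteration cap, and update schedule are assumed values, and the paper's marginal-inference runs actually used a double-loop method rather than plain BP.

```cpp
// Minimal libDAI loopy-BP sketch. All solver settings here are assumptions;
// the paper does not report the values it used.
#include <dai/alldai.h>
#include <iostream>

int main() {
    dai::FactorGraph fg;
    fg.ReadFromFile("model.fg");  // hypothetical factor-graph file

    dai::PropertySet opts;
    opts.set("maxiter", (size_t)10000);          // assumed iteration cap
    opts.set("tol", dai::Real(1e-9));            // assumed tolerance
    opts.set("updates", std::string("SEQRND"));  // assumed update schedule
    opts.set("logdomain", false);

    dai::BP bp(fg, opts);
    bp.init();
    bp.run();

    // Print the single-variable marginal beliefs after convergence.
    for (size_t i = 0; i < fg.nrVars(); ++i)
        std::cout << bp.belief(fg.var(i)) << std::endl;
    return 0;
}
```

Pinning the exact libDAI version and recording these PropertySet values would address the dependency and setup gaps noted above.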