Lifted Weighted Mini-Bucket
Authors: Nicholas Gallo, Alexander T. Ihler
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the utility of this class of approximations, especially in models with strong repulsive potentials. |
| Researcher Affiliation | Academia | Nicholas Gallo, University of California Irvine, Irvine, CA 92697-3435, ngallo1@uci.edu; Alexander Ihler, University of California Irvine, Irvine, CA 92697-3435, ihler@ics.uci.edu |
| Pseudocode | Yes | Algorithm 1 summarizes the LWMB tree construction algorithm (similar to ground mini-bucket construction [7]) developed in this section. |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | No | The paper describes how the data for the experiments was generated synthetically ('We run experiments with N = 512, with clustered evidence. We randomly assign elements to one of K = 16 clusters...'), but it does not refer to a publicly available dataset or provide access information. |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits, percentages, or sample counts needed to reproduce the data partitioning. It only describes the synthetic generation process for the data used. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions that 'code has been written in C++', but it does not specify any software dependencies, libraries, or solvers with version numbers required to replicate the experiments. |
| Experiment Setup | Yes | We run experiments with N = 512, with clustered evidence... We randomly assign elements to one of K = 16 clusters... Each cluster generates a (scalar) center from N(0, 2); each member of the cluster is then perturbed from its center by N(0, 0.4) noise... We call a black-box convex optimization routine (using non-linear conjugate gradients) allowing a maximum of 1000 function evaluations. |
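
The quoted setup describes the synthetic clustered-evidence generation but the paper's C++ code is not released. The following is a minimal sketch of how that generation could be reproduced from the quote alone; the variable names, the random seed, and the reading of N(0, 2) and N(0, 0.4) as (mean, variance) pairs are assumptions, not details taken from the authors' implementation.

```python
import numpy as np

# Hypothetical reconstruction of the clustered-evidence generation quoted above.
rng = np.random.default_rng(0)  # seed is an assumption; the paper does not report one

N = 512   # number of evidence elements, per the quoted setup
K = 16    # number of clusters, per the quoted setup

# Randomly assign each of the N elements to one of K clusters.
cluster_of = rng.integers(low=0, high=K, size=N)

# Each cluster generates a scalar center from N(0, 2) (second argument read as variance).
centers = rng.normal(loc=0.0, scale=np.sqrt(2.0), size=K)

# Each member is perturbed from its cluster center by N(0, 0.4) noise.
evidence = centers[cluster_of] + rng.normal(loc=0.0, scale=np.sqrt(0.4), size=N)
```

The quoted optimization step (black-box non-linear conjugate gradients, capped at 1000 function evaluations) is not sketched here, since the paper does not name the solver or its settings beyond that description.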