Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Variable Elimination in the Fourier Domain
Authors: Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman
ICML 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the significance of this approach by applying it to the variable elimination algorithm. Compared with the traditional bucket representation and other approximate inference algorithms, we obtain significant improvements. [...] 5. Experiments |
| Researcher Affiliation | Academia | Yexiang Xue EMAIL Cornell University, Ithaca, NY, 14853, USA Stefano Ermon EMAIL Stanford University, Stanford, CA, 94305, USA Ronan Le Bras, Carla P. Gomes, Bart Selman EMAIL Cornell University, Ithaca, NY, 14853, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any link or explicit statement about the availability of its own source code. It mentions using libDAI and ACE, which are third-party tools. |
| Open Datasets | Yes | We compare our inference algorithms on large benchmarks from the UAI 2010 Approximate Inference Challenge (UAI). [...] UAI 2010 approximate inference challenge. http://www.cs.huji.ac.il/project/UAI10. |
| Dataset Splits | No | The paper mentions using UAI 2010 challenge benchmarks and generating synthetic training images, but it does not specify any train/validation/test splits for these datasets or instances. It describes the total number of training images but not how they are partitioned. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "libDAI (Mooij, 2010)" and "ACE (Darwiche & Marquis, 2002)" but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | For a fair comparison, we fix the size of the messages for both Fourier VE and Mini-bucket to 2^10 = 1,024. [...] we set the message size for Fourier VE to be 1,048,576 (2^20). Because the complexity of the multiplication step in Fourier VE is quadratic in the number of coefficients, we further shrink the message size to 1,024 (2^10) during multiplication. We allow 1,000,000 steps for burn-in and another 1,000,000 steps for sampling in the MCMC approach. [...] damping rate of 0.1 and the maximal number of iterations 1,000,000. [...] We train the model using contrastive divergence (Hinton, 2002), with k = 15 steps of blocked Gibbs updates, on 20,000 such training images. |
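The quoted setup trains a model with contrastive divergence (CD-k) using k = 15 blocked Gibbs steps. As a point of reference, the sketch below shows one CD-k parameter update for a binary restricted Boltzmann machine; the paper does not give its implementation, so all names, sizes, and the learning rate here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_update(W, b, c, v0, k=15, lr=0.05):
    """One contrastive-divergence (CD-k) update for a binary RBM.

    W: (n_visible, n_hidden) weight matrix; b, c: visible/hidden biases.
    v0: batch of binary visible vectors, shape (batch, n_visible).
    Runs k steps of blocked Gibbs sampling for the negative phase,
    then applies the standard positive-minus-negative gradient estimate.
    Shapes and hyperparameters are illustrative, not from the paper.
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: k blocked Gibbs sweeps (sample all visibles,
    # then all hiddens, as units in each layer are conditionally independent).
    for _ in range(k):
        pv = sigmoid(h @ W.T + b)
        v = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(v @ W + c)
        h = (rng.random(ph.shape) < ph).astype(float)

    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v.T @ ph) / n
    b += lr * (v0 - v).mean(axis=0)
    c += lr * (ph0 - ph).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 4 hidden units, one small binary batch.
W = rng.normal(0.0, 0.01, size=(6, 4))
b = np.zeros(6)
c = np.zeros(4)
data = (rng.random((20, 6)) < 0.5).astype(float)
W, b, c = cd_k_update(W, b, c, data, k=15)
```

In practice this update would be repeated over many mini-batches drawn from the 20,000 training images described above.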