Deep Bucket Elimination
Authors: Yasaman Razeghi, Kalev Kask, Yadong Lu, Pierre Baldi, Sakshi Agarwal, Rina Dechter
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results show that DBE is overall significantly more accurate than WMB, especially on hard instances, and even when the latter is provided the most generous memory resources feasible. |
| Researcher Affiliation | Academia | University of California, Irvine {yrazeghi, kkask, yadongl1, pfbaldi, sakshia1, dechter}@ics.uci.edu |
| Pseudocode | Yes | Algorithm 1: [Deep] Bucket Elimination (DBE); Algorithm 2: approximate-NN(λ, ϵ). A minimal sketch of both algorithms appears below the table. |
| Open Source Code | Yes | We provided the source code to reproduce the results of this paper at https://github.com/dechterlab/DBE. |
| Open Datasets | Yes | We carried out our experiments on instances selected from three well-known benchmarks from the UAI repository used in [Kask et al., 2020]: grids (vision domain), pedigrees (from genetic linkage analysis), and DBNs. |
| Dataset Splits | Yes | Once the samples are available, we split them into training (80%), validation (10%), and test (10%) sets. A split sketch appears below the table. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer and Sherpa software but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We train the network using the Adam optimizer [Kingma and Ba, 2014] with a learning rate of 0.001 and a batch size of 256. In all the experiments, we used 5 × 10⁵ samples for training the NNs with an error bound of ϵ = 10⁻⁶. The number of epochs was bounded at 100. A training-loop sketch appears below the table. |
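For the Pseudocode row: in DBE, bucket elimination proceeds as usual, but when a bucket's output message is too large to compute and store exactly, it is replaced by a neural network trained on sampled input/output pairs of that message (Algorithm 2, approximate-NN(λ, ϵ), with error bound ϵ). The sketch below is a minimal illustration of that control flow, not the authors' implementation; the dense aligned-table factor representation, the tiny MLP, and the names `exact_message`, `sampled_pairs`, `eliminate_bucket`, and `SIZE_BOUND` are all hypothetical.

```python
# Toy sketch of the DBE control flow, assuming PyTorch and NumPy are
# available. Factors are dense tables aligned on one shared scope,
# which is a simplification of real bucket elimination.
import numpy as np
import torch
import torch.nn as nn

SIZE_BOUND = 1_000  # hypothetical cap on entries an exact message may hold

def exact_message(tables, elim_axis):
    """Exact bucket message: elementwise product of the bucket's tables,
    then sum out the eliminated variable's axis."""
    prod = tables[0]
    for t in tables[1:]:
        prod = prod * t
    return prod.sum(axis=elim_axis)

def sampled_pairs(tables, elim_axis, n_samples):
    """Sample assignments to the message scope and their message values.
    Toy only: this evaluates the full exact message, which a real sampler
    must avoid precisely because that table is too large."""
    full = exact_message(tables, elim_axis)
    idx = [np.random.randint(0, d, n_samples) for d in full.shape]
    x = np.stack(idx, axis=1).astype(np.float32)
    y = full[tuple(idx)].astype(np.float32)
    return x, y

def approximate_nn(x, y, eps=1e-6, max_epochs=100):
    """Algorithm 2 sketch: fit a small MLP to the sampled message values,
    stopping once the fit error drops below the error bound eps."""
    net = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    xt, yt = torch.from_numpy(x), torch.from_numpy(y).unsqueeze(1)
    for _ in range(max_epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(xt), yt)
        loss.backward()
        opt.step()
        if loss.item() < eps:
            break
    return net

def eliminate_bucket(tables, elim_axis, size_bound=SIZE_BOUND):
    """Algorithm 1 sketch: keep the message exact while it fits,
    otherwise hand it to a trained NN approximation."""
    out_size = np.prod(
        [d for i, d in enumerate(tables[0].shape) if i != elim_axis])
    if out_size <= size_bound:
        return exact_message(tables, elim_axis)
    x, y = sampled_pairs(tables, elim_axis, n_samples=5_000)
    return approximate_nn(x, y)
```

Note that the toy sampler defeats the purpose at scale by materializing the full table; the point of the sketch is only the branch between an exact message and an NN stand-in.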
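The 80/10/10 split quoted in the Dataset Splits row can be reproduced generically as follows. This is a plain shuffled split assuming NumPy, not the authors' data loader, and the argument names are placeholders.

```python
import numpy as np

def split_80_10_10(x, y, seed=0):
    """Shuffle sampled (assignment, value) pairs, then cut them into
    training (80%), validation (10%), and test (10%) sets."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(x))
    x, y = x[perm], y[perm]
    n_train = int(0.8 * len(x))
    n_val = int(0.1 * len(x))
    train = (x[:n_train], y[:n_train])
    val = (x[n_train:n_train + n_val], y[n_train:n_train + n_val])
    test = (x[n_train + n_val:], y[n_train + n_val:])
    return train, val, test
```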
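The Experiment Setup row pins down the optimizer, learning rate, batch size, epoch cap, and error bound. A training loop wired to exactly those values might look like the following; the MSE loss and the early stop on mean epoch loss are assumptions, since the row states the bound ϵ but not how it is checked.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(net, x, y, eps=1e-6, lr=1e-3, batch_size=256, max_epochs=100):
    """Adam with lr=0.001 and batch size 256, as quoted; at most 100
    epochs, with an (assumed) early stop once the mean epoch loss
    falls below the error bound eps = 1e-6."""
    loader = DataLoader(TensorDataset(x, y),
                        batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(max_epochs):
        total = 0.0
        for xb, yb in loader:
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(xb), yb)
            loss.backward()
            opt.step()
            total += loss.item() * xb.size(0)
        if total / len(loader.dataset) < eps:
            break
    return net
```

A typical call would pass a small MLP, e.g. `train(nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, 1)), x_train, y_train)`, with the training tensors coming from the split sketch above (after `torch.from_numpy` conversion).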