DPSampler: Exact Weighted Sampling Using Dynamic Programming

Authors: Jeffrey M. Dudek, Aditya A. Shrotri, Moshe Y. Vardi

IJCAI 2022

Reproducibility variables, results, and supporting evidence (LLM response):

Research Type: Experimental
  "We seek to answer the following questions in our study: 1. How close is the distribution generated by DPSampler to that of an ideal sampler? 2. How does the new top-down ADD-sampling algorithm (Alg. 1) perform compared to the bottom-up procedure of [Chakraborty et al., 2020]? 3. How does DPSampler perform compared to the state-of-the-art, especially on low-treewidth instances?" (Sec. 6, Empirical Evaluation)

Researcher Affiliation: Academia
  "Jeffrey M. Dudek, Aditya A. Shrotri, Moshe Y. Vardi — Rice University — {jmd11, as128, vardi}@rice.edu"

Pseudocode: Yes
  "Algorithm 1 sampleFromADD(f, w, σ)"

Open Source Code: Yes
  "Code, results and full version of the text is available at https://www.gitlab.com/Shrotri/dpsampler"

Open Datasets: Yes
  "We tested the two implementations on the suite of 1945 benchmarks used in [Dudek et al., 2020b]"

Dataset Splits: No
  The paper uses pre-existing benchmark sets and does not describe explicit training, validation, or test splits of the kind common in machine-learning experiments.

Hardware Specification: Yes
  "Each experiment had exclusive access to one node comprising of 16 cores (32 threads) with an Intel Xeon E5-2650 v2 processor running at 2.6 GHz, with memory capped at 30 GB."

Software Dependencies: Yes
  "We used GCC 9.4.0 for compiling DPSampler with Ofast flag enabled, along with CUDD [Somenzi, 2012] library version 3.0."

Experiment Setup: Yes
  "A benchmark is solved by a tool if the tool is able to generate 5000 samples within a timeout of 1000 seconds. We treat both timeouts and memouts as failures."
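To make the pseudocode entry concrete: the paper's Algorithm 1 samples an assignment σ top-down from an ADD representing the formula f under literal weights w. The sketch below is a simplified, hypothetical illustration of that idea, not the paper's implementation — the real tool builds ADDs with the CUDD library, memoizes weighted counts once per diagram, and handles variables skipped along a path; the `ADD`, `weighted_count`, and `sample_from_add` names here are invented for the example.

```python
import random

class ADD:
    """Toy ADD node (hypothetical): terminals hold a value; internal
    nodes branch on a Boolean variable via low/high children."""
    def __init__(self, var=None, low=None, high=None, value=None):
        self.var, self.low, self.high, self.value = var, low, high, value

def weighted_count(node, w, memo=None):
    """Total weight of satisfying assignments below `node`.
    w[v] is the weight of setting variable v to True (1 - w[v] for False).
    In a real implementation this memo would be computed once and reused."""
    if node.var is None:
        return node.value
    if memo is None:
        memo = {}
    if id(node) in memo:
        return memo[id(node)]
    total = ((1 - w[node.var]) * weighted_count(node.low, w, memo)
             + w[node.var] * weighted_count(node.high, w, memo))
    memo[id(node)] = total
    return total

def sample_from_add(node, w, rng=random):
    """Top-down weighted sampling in the spirit of Algorithm 1:
    at each internal node, take a branch with probability proportional
    to its weighted contribution. Simplification: variables not tested
    on the path are left unassigned (they could be sampled independently
    by their weights)."""
    sigma = {}
    while node.var is not None:
        hi = w[node.var] * weighted_count(node.high, w)
        lo = (1 - w[node.var]) * weighted_count(node.low, w)
        take_high = rng.random() < hi / (hi + lo)
        sigma[node.var] = take_high
        node = node.high if take_high else node.low
    return sigma
```

Because a branch with zero weighted count is chosen with probability zero, every walk ends at a nonzero terminal, so each returned σ is a satisfying assignment drawn exactly in proportion to its weight within the sampled variables.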