Approximating the Permanent by Sampling from Adaptive Partitions

Authors: Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, Stefano Ermon

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We find that ADAPART can provide empirical speedups exceeding 30x over prior sampling methods on matrices that are challenging for variational based approaches.
Researcher Affiliation | Collaboration | ¹Stanford University, ²Allen Institute for Artificial Intelligence; {kuck,trid,hamidrt,ermon}@stanford.edu, ashishs@allenai.org
Pseudocode | Yes | Algorithm 1 describes our proposed method, ADAPART.
Open Source Code | No | The paper does not include an explicit statement about releasing the source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | We sampled 10 cycle covers from distributions arising from graphs (matrices available at http://networkrepository.com) in the fields of cheminformatics, DNA electrophoresis, and power networks and report mean runtimes in Table 1.
Dataset Splits | No | The paper mentions using 'random matrices', 'synthetic multi-target tracking data', and 'real-world matrices', but does not describe how these datasets were split into training, validation, and test sets (e.g., percentages, sample counts, or an explicit split methodology).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. It only vaguely mentions 'a modern laptop'.
Software Dependencies | No | The paper mentions software such as a 'Cython implementation' and 'MATLAB code' but does not specify version numbers for any key software components or libraries used in its experiments.
Experiment Setup | No | The paper describes the methodology and data-generation process, but it does not report experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, number of epochs) or other system-level training configurations for the models and algorithms used in the experiments.
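Because the ADAPART pseudocode is only referenced above (Algorithm 1) and no source code is released, the sketch below illustrates the general setting the paper works in: estimating the permanent of a nonnegative matrix by sampling. It is a minimal sketch of a generic sequential-importance-sampling baseline checked against Ryser's exact formula, not the paper's ADAPART partitioning scheme; the function names (`ryser_permanent`, `sis_permanent_estimate`), the matrix size, and the sample count are illustrative assumptions.

```python
import itertools
import random

def ryser_permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
    Only practical for small matrices; used here as a ground-truth check."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def sis_permanent_estimate(A, num_samples=10000, rng=random):
    """Unbiased sequential-importance-sampling estimate of the permanent of a
    nonnegative matrix: rows are processed in order, a column is drawn with
    probability proportional to the remaining entries in that row, and the
    sample weight is the product of the row normalizers.
    This is a generic baseline, NOT the paper's ADAPART method."""
    n = len(A)
    total_weight = 0.0
    for _ in range(num_samples):
        remaining = list(range(n))
        weight = 1.0
        for i in range(n):
            z = sum(A[i][j] for j in remaining)
            if z == 0.0:
                # No valid column left for this row: the sample has zero weight.
                weight = 0.0
                break
            weight *= z
            # Draw column j with probability A[i][j] / z via inverse CDF.
            u = rng.random() * z
            acc = 0.0
            for j in remaining:
                acc += A[i][j]
                if u <= acc:
                    remaining.remove(j)
                    break
        total_weight += weight
    return total_weight / num_samples

if __name__ == "__main__":
    # Small random matrix so the exact permanent is cheap to compute.
    A = [[random.random() for _ in range(6)] for _ in range(6)]
    print("exact (Ryser):", ryser_permanent(A))
    print("SIS estimate :", sis_permanent_estimate(A))
```

The SIS estimator above is unbiased for nonnegative matrices but can have very high variance on the kinds of hard matrices the paper reports on; adaptive-partitioning methods such as ADAPART are designed to avoid that failure mode, which this sketch does not attempt to reproduce.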