A Faster Practical Approximation Scheme for the Permanent

Authors: Juha Harviainen, Mikko Koivisto

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical Results: We empirically compared the proposed upper bound and preprocessing method against previous state of the art."
Researcher Affiliation | Academia | "Juha Harviainen, Mikko Koivisto, University of Helsinki, juha.harviainen@helsinki.fi, mikko.koivisto@helsinki.fi"
Pseudocode | Yes | "Algorithm 1: Nesting rejection sampling" (see the illustrative sketch after this table)
Open Source Code | Yes | "Our implementations as well as the details of the computing infrastructure are included in the supplement." https://github.com/Kalakuh/nesting
Open Datasets | Yes | "In addition, we tested the schemes on real-world matrices from Network Data Repository (Rossi and Ahmed 2015) licensed under CC-BY-SA."
Dataset Splits | No | The paper does not describe train/validation/test splits; it evaluates approximation algorithms for matrix permanents by the running time needed to reach a given approximation quality, not by training models on partitioned data.
Hardware Specification | No | The paper states that "details of the computing infrastructure are included in the supplement" and links to a GitHub repository, but the main text does not specify hardware (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "We followed the test setup of Harviainen et al. (2021) and evaluated our implementations by estimating the running time required for getting a (0.1, 0.05)-approximation, corresponding to 388 accepted draws with the GBAS algorithm. The estimate of the time was computed based on the time required for 65 accepted draws to save computation time. The running time was limited to 4825 seconds per instance." (The extrapolation is sketched after this table.)
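The Pseudocode row above only names the paper's Algorithm 1, "Nesting rejection sampling". As background for the acceptance-rate idea that such schemes build on, the sketch below is a deliberately naive rejection-sampling estimator of the permanent for matrices with entries in [0, 1]. It is not the paper's nesting scheme; the function name, trial count, and test matrix are our own illustrative choices.

```python
import math
import random


def naive_rejection_permanent(a, trials=100_000, rng=None):
    """Naively estimate per(a) for an n x n matrix with entries in [0, 1].

    Proposes uniformly random permutations sigma and accepts each one with
    probability prod_i a[i][sigma[i]].  The acceptance probability equals
    per(a) / n!, so per(a) is estimated as n! * (accepted / trials).
    """
    rng = rng if rng is not None else random.Random(0)
    n = len(a)
    accepted = 0
    for _ in range(trials):
        sigma = list(range(n))
        rng.shuffle(sigma)           # uniform random permutation proposal
        weight = 1.0
        for i in range(n):
            weight *= a[i][sigma[i]]
        if rng.random() < weight:    # accept with probability = permutation weight
            accepted += 1
    return math.factorial(n) * accepted / trials


if __name__ == "__main__":
    # The permanent of the 3x3 all-ones matrix is 3! = 6.
    ones = [[1.0] * 3 for _ in range(3)]
    print(naive_rejection_permanent(ones))  # prints a value close to 6.0
```

The paper's contribution is precisely to make the acceptance step far more efficient than this baseline; the sketch only shows why "accepted draws" is the natural unit of work in the experiments.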
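The Experiment Setup row quotes the evaluation protocol: 388 accepted GBAS draws correspond to a (0.1, 0.05)-approximation, timings are extrapolated from a pilot run of 65 accepted draws, and each instance is capped at 4825 seconds. The sketch below encodes that bookkeeping under the assumption, not spelled out in the quote, that running time scales linearly with the number of accepted draws; the pilot timing value and function name are hypothetical.

```python
# Constants taken from the quoted experiment setup.
FULL_DRAWS = 388      # accepted draws for a (0.1, 0.05)-approximation with GBAS
PILOT_DRAWS = 65      # accepted draws actually timed, to save computation
TIME_LIMIT_S = 4825   # per-instance running-time limit in seconds


def estimated_full_time(pilot_time_s: float) -> float:
    """Extrapolate the time for 388 accepted draws from a 65-draw pilot run,
    assuming time grows linearly in the number of accepted draws."""
    return pilot_time_s * FULL_DRAWS / PILOT_DRAWS


if __name__ == "__main__":
    pilot = 600.0  # hypothetical: seconds needed for 65 accepted draws
    full = estimated_full_time(pilot)
    print(f"estimated time for {FULL_DRAWS} accepted draws: {full:.0f} s")
    print("within limit" if full <= TIME_LIMIT_S else "exceeds per-instance limit")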