Estimating the Permanent by Nesting Importance Sampling

Authors: Juha Harviainen, Mikko Koivisto

ICML 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluated the performance of NIS on real-world matrices (Rossi & Ahmed, 2015) licensed under CC-BY-SA and synthetic instances used by Harviainen et al. (2021) and Harviainen & Koivisto (2023). The used instantiation of our scheme, dubbed DEEPNIS, utilizes the depth-16 variant of the extended Huber bound. We also remove redundant entries from the matrix (Tassa, 2012) and apply 100 iterations of Sinkhorn balancing like Alimohammadi et al. (2023). DEEPNIS is compared against the state-of-the-art rejection sampler DEEPAR, which uses the same bound but applies n² rounds of Sinkhorn balancing only if doing so tightens the ratio U(A)/per(A). Additionally, they try tightening the ratio by randomized scaling of the columns. In all cases, we computed (ϵ, δ)-approximations of the permanents of the instances with δ = 0.05. The implementations are publicly available on GitHub. For real-world matrices, we ran both schemes with varying values of ϵ ∈ {0.005, 0.01, 0.02} on each instance independently 20 times. Each run was allowed to use 12 hours (43 200 seconds) of time. The results are reported in Table 1, which contains the average times used for obtaining the estimates. Additionally, we report the ratio of the average time used by DEEPAR to that used by DEEPNIS. If any of the runs exceeded the time limit, we indicate this instead, as otherwise the averages would not be comparable. The running times t are rounded in the table to a number of the form b · 10^k that minimizes the relative error from t. The synthetic instances are from three classes of matrices. Uniform consists of n × n matrices with Uniform[0, 1]-distributed entries. Block Diagonal has blocks of 5 × 5 matrices with uniformly distributed entries on the main diagonal and zero elsewhere. Bernoulli has matrices where each entry is 1 with probability 0.1 and 0 otherwise. We evaluated the schemes with decreasing values of ϵ on a single instance from each class and with instances of increasing size n with ϵ = 0.01.
Only the results for instances with n ≥ 20 are reported in Figure 1, since smaller instances can be solved exactly in less than a second (Ryser, 1963). We see that DEEPNIS beats DEEPAR in all instances, with the ratio of running times of DEEPAR to DEEPNIS being up to two orders of magnitude.
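The three synthetic matrix classes and the Sinkhorn preprocessing described above are straightforward to reproduce. A minimal sketch in plain Python (function names are ours, not the authors'; the paper's implementation on GitHub is the authoritative version):

```python
import random

def uniform_instance(n, rng):
    """Uniform: n x n matrix with Uniform[0, 1]-distributed entries."""
    return [[rng.random() for _ in range(n)] for _ in range(n)]

def block_diagonal_instance(n, rng, block=5):
    """Block Diagonal: uniform block x block blocks on the main diagonal, zero elsewhere."""
    A = [[0.0] * n for _ in range(n)]
    for k in range(0, n, block):
        for i in range(k, min(k + block, n)):
            for j in range(k, min(k + block, n)):
                A[i][j] = rng.random()
    return A

def bernoulli_instance(n, rng, p=0.1):
    """Bernoulli: each entry is 1 with probability p (0.1 in the paper), else 0."""
    return [[1.0 if rng.random() < p else 0.0 for _ in range(n)] for _ in range(n)]

def sinkhorn_balance(A, iterations=100):
    """Alternately normalize rows and columns (Sinkhorn balancing);
    the paper applies 100 iterations as preprocessing."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(iterations):
        for i in range(n):                       # row normalization
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(n):                       # column normalization
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A
```

Note that Sinkhorn balancing as written assumes no all-zero row or column, which dense Uniform instances satisfy; sparse Bernoulli instances may need a support check first.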
Researcher Affiliation Academia Department of Computer Science, University of Helsinki, Helsinki, Finland.
Pseudocode Yes Algorithm 1: Sequential importance sampling; Algorithm 2: SIS for permanent
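To illustrate what the cited pseudocode computes, here is a hedged sketch of the textbook sequential-importance-sampling estimator for the permanent (not necessarily the paper's exact variant): each row picks a still-unused column with probability proportional to the entry, and the product of the row normalizers is an unbiased estimate of per(A).

```python
import random

def sis_permanent_draw(A, rng):
    """One SIS draw: sample a column for each row among the unused columns
    with probability proportional to the entry; return the product of the
    per-row normalizing constants, an unbiased estimate of per(A)."""
    n = len(A)
    free = set(range(n))
    weight = 1.0
    for i in range(n):
        cols = [j for j in free if A[i][j] > 0]
        total = sum(A[i][j] for j in cols)
        if total == 0.0:          # dead end: this draw contributes zero
            return 0.0
        weight *= total
        r = rng.random() * total  # sample j with probability A[i][j] / total
        chosen, acc = cols[-1], 0.0
        for j in cols:
            acc += A[i][j]
            if r <= acc:
                chosen = j
                break
        free.remove(chosen)
    return weight

def estimate_permanent(A, samples, rng):
    """Average of independent SIS draws."""
    return sum(sis_permanent_draw(A, rng) for _ in range(samples)) / samples
```

For the 2 × 2 matrix [[1, 2], [3, 4]] with per(A) = 1·4 + 2·3 = 10, every draw returns either 12 or 9, and the average concentrates around 10.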
Open Source Code Yes The implementations are publicly available on GitHub: https://github.com/Sums-of-Products/nesting-importance-sampling/
Open Datasets Yes We evaluated the performance of NIS on real-world matrices (Rossi & Ahmed, 2015) licensed under CC-BY-SA and synthetic instances used by Harviainen et al. (2021) and Harviainen & Koivisto (2023).
Dataset Splits No The paper focuses on estimating values with (ϵ, δ)-approximation accuracy guarantees for Monte Carlo methods, not on traditional machine learning dataset splits (training, validation, testing) for model learning or hyperparameter tuning. Therefore, it does not specify validation dataset splits in the conventional sense.
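For context, an (ϵ, δ)-approximation returns an estimate within relative error ϵ of the true value with probability at least 1 − δ. One standard way to boost a cheap unbiased estimator to a small failure probability δ is the median-of-means trick; this is a generic illustration, not the paper's specific machinery:

```python
import statistics

def median_of_means(draws, groups):
    """Split i.i.d. draws into equal groups, average each group, and take
    the median of the group means. The median is far less sensitive to
    outlier groups, so the failure probability shrinks exponentially in
    the number of groups (Chebyshev per group + Chernoff over groups)."""
    k = len(draws) // groups
    means = [sum(draws[g * k:(g + 1) * k]) / k for g in range(groups)]
    return statistics.median(means)
```

With δ = 0.05 as in the paper's experiments, on the order of log(1/δ) groups suffice for the confidence amplification.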
Hardware Specification No The paper does not provide specific details about the hardware used, such as CPU or GPU models, memory, or cloud instance types. It only mentions running times and comparisons between algorithms.
Software Dependencies No The paper states that 'The implementations are publicly available on Git Hub', but it does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries).
Experiment Setup Yes The used instantiation of our scheme, dubbed DEEPNIS, utilizes the depth-16 variant of the extended Huber bound. We also remove redundant entries from the matrix (Tassa, 2012) and apply 100 iterations of Sinkhorn balancing like Alimohammadi et al. (2023).