Fast Maximization of Non-Submodular, Monotonic Functions on the Integer Lattice

Authors: Alan Kuhnle, J. David Smith, Victoria Crawford, My T. Thai

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate our proposed algorithms for the GIM problem defined in Section 5. Source code for the implementation is available at https://gitlab.com/emallson/lace. We evaluate our algorithms as compared with Standard Greedy; by the naive reduction of the lattice to sets in exponential time, this algorithm is equivalent to performing this reduction and running the standard greedy for sets, the performance of which for non-submodular set functions was analyzed by Bian et al. (2017b). In Section 6.1, we describe our methodology; in Section 6.2, we compare the algorithms and non-submodularity parameters.
Researcher Affiliation | Academia | University of Florida, Gainesville, Florida. Correspondence to: Alan Kuhnle <kuhnle@ufl.edu>, My T. Thai <mythai@ufl.edu>.
Pseudocode | Yes | Algorithm 1 (Standard Greedy), Algorithm 2 (Threshold Greedy), Algorithm 3 (Fast Greedy). (Simplified, hedged sketches of the standard and threshold greedy procedures are given after this table.)
Open Source Code | Yes | Source code for the implementation is available at https://gitlab.com/emallson/lace.
Open Datasets | Yes | We evaluate on two networks taken from the SNAP dataset (Leskovec & Krevl, 2014): ca-GrQc (GrQc; 15K nodes, 14.5K edges) and facebook (Facebook; 4K nodes, 176K edges).
Dataset Splits | No | The paper uses Monte Carlo sampling but does not specify any training, validation, or test dataset splits, and gives no information on data partitioning for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions Monte Carlo sampling and provides a link to source code, but it does not specify software dependencies (e.g., libraries, frameworks, or programming language versions) required to reproduce the experiments.
Experiment Setup | Yes | Our implementation uses Monte Carlo sampling to estimate the objective value A(x), with 10,000 samples used. ... Unless noted otherwise, we use default settings of ε = 0.05, δ = 0.9, κ = 0.95. (A generic Monte Carlo estimation sketch is given after the table.)
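
The Standard Greedy baseline quoted in the Research Type row treats each additional copy of a lattice element as if it were a fresh set element. As a rough illustration only (not the authors' implementation in the linked repository), the following Python sketch adds one unit at a time under an assumed interface: a callable objective `f`, a componentwise bound `b`, and a total budget `k`, all of which are hypothetical names.

```python
import numpy as np

def standard_greedy(f, b, k):
    """Unit-by-unit greedy on the integer lattice (illustrative sketch).

    f : monotone objective on integer vectors (hypothetical callback)
    b : componentwise upper bound, x <= b
    k : budget on the total number of units, sum(x) <= k
    """
    n = len(b)
    x = np.zeros(n, dtype=int)
    for _ in range(k):
        fx = f(x)
        best_i, best_gain = None, float("-inf")
        for i in range(n):
            if x[i] < b[i]:
                # Marginal gain of one more copy of element i.
                gain = f(x + np.eye(n, dtype=int)[i]) - fx
                if gain > best_gain:
                    best_i, best_gain = i, gain
        if best_i is None:  # box constraint saturated everywhere
            break
        x[best_i] += 1
    return x
```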
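The Threshold Greedy and Fast Greedy variants listed in the Pseudocode row speed this up by accepting any element whose marginal gain clears a geometrically decreasing threshold, with Fast Greedy additionally deciding how many copies to add at once. The sketch below is a deliberately simplified, hedged version of the threshold idea using the same hypothetical `f`, `b`, `k` interface; it adds one unit at a time and uses an illustrative stopping rule rather than the paper's exact one.

```python
import numpy as np

def threshold_greedy(f, b, k, eps=0.05):
    """Simplified threshold-greedy sketch for a monotone lattice function.

    f   : monotone objective on integer vectors (hypothetical callback)
    b   : componentwise upper bound, x <= b
    k   : budget on the total number of units, sum(x) <= k
    eps : thresholds shrink by a factor of (1 - eps) each pass

    Unlike the paper's Algorithms 2 and 3, this adds one unit at a time
    and uses an illustrative stopping threshold rather than the exact one.
    """
    n = len(b)
    x = np.zeros(n, dtype=int)
    base = f(x)
    # The largest single-unit marginal gain fixes the initial threshold.
    d = max(f(x + np.eye(n, dtype=int)[i]) - base for i in range(n))
    if d <= 0:
        return x
    tau, stop = d, (eps / n) * d
    while tau > stop and x.sum() < k:
        for i in range(n):
            # Keep adding copies of element i while the gain clears tau.
            while x[i] < b[i] and x.sum() < k:
                gain = f(x + np.eye(n, dtype=int)[i]) - f(x)
                if gain < tau:
                    break
                x[i] += 1
        tau *= 1.0 - eps
    return x
```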
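The Experiment Setup row states that the objective A(x) is estimated by Monte Carlo sampling with 10,000 samples. The influence model behind A(x) is not reproduced here; the sketch below shows only the generic averaging step, with `simulate_once` standing in as a hypothetical callback that draws one random cascade for a budget vector x and returns its value.

```python
import random

def monte_carlo_estimate(simulate_once, x, samples=10_000, seed=0):
    """Plain Monte Carlo average; estimates A(x) = E[simulate_once(x, rng)].

    simulate_once : hypothetical callback drawing one random realization
                    (e.g. one influence cascade) for budget vector x
    samples       : number of samples; the paper reports using 10,000
    seed          : fixing the seed gives repeatable estimates of A(x)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += simulate_once(x, rng)
    return total / samples
```

Reusing the same random draws across evaluations (common random numbers) is a standard way to reduce noise when a greedy routine compares marginal gains; whether the authors' implementation does this is not stated in the excerpts above.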