Randomized Algorithms for Submodular Function Maximization with a $k$-System Constraint
Authors: Shuang Cui, Kai Han, Tianshuai Zhu, Jing Tang, Benwei Wu, He Huang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical performance of our algorithms is extensively evaluated in several applications related to data mining and social computing, and the experimental results demonstrate the superiorities of our algorithms in terms of both utility and efficiency. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology / Suzhou Research Institute, University of Science and Technology of China; 2Data Science and Analytics Thrust, The Hong Kong University of Science and Technology; 3School of Computer Science and Technology, Soochow University. |
| Pseudocode | Yes | Algorithm 1: RANDOMMULTIGREEDY(ℓ, p); Algorithm 2: ADAPTRANDOMGREEDY(p) |
| Open Source Code | No | The paper mentions implementing algorithms but does not provide any links or explicit statements about the availability of its source code. |
| Open Datasets | Yes | In our experiments, we use the MovieLens dataset (Haba et al., 2020) containing 1793 movies... We perform the experiment using the CIFAR-10 dataset (Krizhevsky et al., 2009) containing ten thousand 32×32 color images. We use the Last.FM Social Network (Barbieri & Bonchi, 2014; Aslay et al., 2017) with 1372 nodes and 14708 edges... |
| Dataset Splits | No | The paper describes experimental settings and parameters but does not explicitly mention train/validation/test dataset splits or cross-validation details. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for its implementation or dependencies. |
| Experiment Setup | Yes | In all experiments, we adopt the optimal settings of each implemented algorithm such that their theoretical approximation ratio is minimized (e.g., setting ℓ = 2, p = 2/(1+√k) for RAMG), and we set ϵ = 0.1 whenever ϵ is an input parameter for the considered algorithms. |
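The quoted setup fixes RAMG's parameters as ℓ = 2 and p = 2/(1+√k), with ϵ = 0.1. A minimal sketch of computing these settings for a given k-system parameter k (the helper name `ramg_settings` is ours, not from the paper):

```python
from math import sqrt

def ramg_settings(k, epsilon=0.1):
    """Return the (ell, p, epsilon) settings quoted in the paper's
    experiment setup: ell = 2 and p = 2 / (1 + sqrt(k)) for RAMG,
    with epsilon = 0.1 as the default input parameter."""
    ell = 2
    p = 2 / (1 + sqrt(k))
    return ell, p, epsilon

# Example: for a k-system constraint with k = 4,
# p = 2 / (1 + 2) = 2/3.
print(ramg_settings(4))
```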