Bridging the Gap between General and Down-Closed Convex Sets in Submodular Maximization
Authors: Loay Mualem, Murad Tukan, Moran Feldman
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also empirically demonstrate the superiority of our proposed algorithms across three offline and two online applications. Our offline and online algorithms...can be found in Sections 4 and 5, respectively. Finally, Section 6 compares the empirical performance of our algorithms on multiple machine learning applications with the performance of previously suggested algorithms from the literature. |
| Researcher Affiliation | Collaboration | Loay Mualem (University of Haifa), Murad Tukan (Data Heroes), Moran Feldman (University of Haifa); loaymua@gmail.com, murad@dataheroes.ai, moranfe@cs.haifa.ac.il |
| Pseudocode | Yes | Algorithm 1: Frank-Wolfe/Continuous-Greedy Hybrid for Known F(o(1)_D); Algorithm 2: Online Frank-Wolfe/Continuous-Greedy Hybrid |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | The first dataset is sourced from a Facebook network [Viswanath et al., 2009], encompassing 64K users (vertices) and 1M unweighted relationships (edges). The second dataset is based on the Advogato network [Massa et al., 2009], comprising 6.5K users (vertices) and 61K weighted relationships (edges). |
| Dataset Splits | No | The paper names the datasets used but gives no training/validation/test splits. It specifies the number of time steps for the online experiments and the graph sizes, but no data partitioning in the usual train/validation/test sense. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions that algorithms were implemented but does not provide specific software names with version numbers for dependencies (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | In both algorithms, we have set the number of online linear optimizers used to be 100 (which corresponds to setting the error parameter ε to 0.01 in our algorithm and to ln 2/100 in the algorithm of Mualem & Feldman). We set the number of time steps to L = 1000, with p = 0.0001. The above objective functions are optimized subject to the constraint 0.1 ≤ Σᵢ xᵢ ≤ 1. Both algorithms have been executed for T = 100 iterations, and the error parameter ε was set accordingly. |
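The setup row above quotes a Frank-Wolfe/continuous-greedy style run for T = 100 iterations under the constraint 0.1 ≤ Σᵢ xᵢ ≤ 1. The sketch below is purely illustrative and is not the authors' code: the function names `lmo` and `frank_wolfe`, the toy quadratic objective, and the 1/T step size are all assumptions made for this example, showing only the generic shape of such a method (a linear maximization oracle over the feasible set, plus T small steps whose average stays feasible).

```python
import numpy as np

def lmo(grad, low=0.1, high=1.0):
    """Hypothetical linear maximization oracle over
    {x in [0,1]^n : low <= sum(x) <= high}, with high <= 1.
    Since the budget fits in one coordinate, all mass goes to the
    coordinate with the largest gradient entry: the full budget if that
    entry is positive, otherwise only enough to meet the lower bound."""
    i = int(np.argmax(grad))
    v = np.zeros_like(grad)
    v[i] = high if grad[i] > 0 else low
    return v

def frank_wolfe(grad_f, n, T=100):
    """Generic Frank-Wolfe/continuous-greedy loop (illustrative only).
    The final iterate is the average of T feasible LMO outputs, so it
    stays inside the (convex) feasible region."""
    x = np.zeros(n)
    for _ in range(T):
        v = lmo(grad_f(x))
        x = x + v / T  # step size 1/T, as in continuous greedy
    return x

# Toy DR-submodular-style quadratic F(x) = w.x - 0.5 x'Ax (assumed here)
w = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
A = 0.1 * np.ones((5, 5))
x = frank_wolfe(lambda x: w - A @ x, n=5, T=100)
```

The point of the sketch is only the control flow: a feasibility-preserving oracle plus T averaged steps, matching the T = 100 iterations quoted from the paper.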