Online Submodular Maximization via Adaptive Thresholds

Authors: Zhengchen Yang, Jiping Zheng

IJCAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on diverse datasets confirm that ONLINEADAPTIVE outperforms existing algorithms in both quality and efficiency." |
| Researcher Affiliation | Academia | 1. College of Computer Science & Technology, Nanjing University of Aeronautics & Astronautics, Nanjing, China; 2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China. {yangzc, jzh}@nuaa.edu.cn |
| Pseudocode | Yes | Algorithm 1: The ONLINEADAPTIVE algorithm |
| Open Source Code | Yes | "Our code is publicly available": https://github.com/dcsjzh/OnlineAdaptive |
| Open Datasets | Yes | Forest Cover (286,048 × 10) [Liu et al., 2008]; Credit Card Fraud (284,807 × 29) [Pozzolo et al., 2015]; KDDCup99 (48,113 × 79) [Campos et al., 2016]; YouTube (9,010 × 4) [Kazemi et al., 2019]; Twitter (42,104) [Kazemi et al., 2019] |
| Dataset Splits | No | The paper describes experiments on streaming data and does not specify explicit train/validation/test splits, cross-validation, or other data-partitioning details. |
| Hardware Specification | Yes | "All the experiments were conducted on a machine running Ubuntu 20.04 with an Intel(R) Xeon(R) E3-1225 3.30GHz CPU and 16 GB main memory." |
| Software Dependencies | No | The paper mentions Ubuntu 20.04 as the operating system but does not list specific software dependencies with version numbers for the libraries or frameworks used in the experiments (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | "We set κ = 1 and σ = 1/(2d) in our experiments, where d denotes the dimensionality of the elements in each dataset. ... we set κ = 10 and σ = 1 in our experiments. ... we set c = 1 to achieve its best ratio 1/4 in our experiments. ... varying r ∈ {1, 3, 5, 7, 9} while fixing the solution size k to 30. ... varying solution size k ∈ {10, 20, 30, 40, 50}." |
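The parameter settings quoted in the Experiment Setup row can be collected into a small configuration helper. This is a minimal sketch with illustrative names (`experiment_config`, `kernel_setting` are not from the paper); the authors' released code defines the actual interface:

```python
def experiment_config(d, kernel_setting="default"):
    """Parameter settings quoted from the paper, for a dataset of dimensionality d.

    Hypothetical helper: names are illustrative, values come from the paper's
    Experiment Setup description.
    """
    if kernel_setting == "default":
        kappa, sigma = 1, 1.0 / (2 * d)   # kappa = 1, sigma = 1/(2d)
    else:
        kappa, sigma = 10, 1.0            # alternative setting: kappa = 10, sigma = 1
    return {
        "kappa": kappa,
        "sigma": sigma,
        "c": 1,                            # c = 1 achieves the stated 1/4 ratio
        "r_values": [1, 3, 5, 7, 9],       # r varied with solution size k fixed at 30
        "k_values": [10, 20, 30, 40, 50],  # solution sizes varied in the experiments
    }
```

For example, on a 10-dimensional dataset the default setting yields σ = 1/20 = 0.05.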