Cardinality constrained submodular maximization for random streams
Authors: Paul Liu, Aviad Rubinstein, Jan Vondrák, Junyao Zhao
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we show that the algorithms are simple to implement and work well on real world datasets. ... 5 Experimental results |
| Researcher Affiliation | Academia | Paul Liu, Department of Computer Science, Stanford University, paul.liu@stanford.edu; Aviad Rubinstein, Department of Computer Science, Stanford University, aviad@stanford.edu; Jan Vondrák, Department of Mathematics, Stanford University, jvondrak@stanford.edu; Junyao Zhao, Department of Computer Science, Stanford University, junyaoz@stanford.edu |
| Pseudocode | Yes | Algorithm 1 Partitioning stream E into m windows. ... Algorithm 2 MONOTONESTREAM(f, E, k, α) ... Algorithm 3 NONMONOTONESTREAM(f, E, k, α) |
| Open Source Code | Yes | All code can be found at https://github.com/where-is-paul/submodular-streaming |
| Open Datasets | Yes | Our datasets are drawn from set coverage instances from the 2003 and 2004 workshops on Frequent Itemset Mining Implementations [oDM04] and the Steiner triple instances of Beasley [Bea87]. ... all datasets can be found at https://tinyurl.com/neurips-21. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. It mentions averaging across 10 random stream orderings, which describes experimental runs, not data splits. |
| Hardware Specification | Yes | All experiments were performed on a 2.7 GHz dual-core Intel i7 CPU with 16 GB of RAM. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the general experimental setup, including the benchmark algorithms and datasets used, but it does not specify concrete parameter settings for its own algorithms (e.g., the values of α or the number of windows m used in the runs). |
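For context on the Pseudocode row above: the paper's Algorithm 2, MONOTONESTREAM(f, E, k, α), is a single-pass routine for monotone submodular maximization under a cardinality constraint. The sketch below is not a reproduction of that algorithm; it is a minimal illustration of the classic thresholding approach in the same family (SIEVE-STREAMING of Badanidiyuru et al.), paired with a toy set-coverage objective in the spirit of the FIMI benchmark instances quoted in the Open Datasets row. The names `sieve_stream` and `coverage`, the parameter `eps`, and the toy data are illustrative assumptions, not artifacts from the paper's repository.

```python
import math

def sieve_stream(stream, f, k, eps=0.1):
    """Single-pass thresholding for monotone submodular f under |S| <= k.

    Keeps one candidate solution per guess v = (1+eps)^j of OPT, and adds
    an element whenever its marginal gain clears (v/2 - f(S)) / (k - |S|).
    """
    m = 0.0       # largest singleton value seen so far
    sieves = {}   # j -> (solution list, f(solution)) for the guess (1+eps)^j
    for e in stream:
        m = max(m, f([e]))
        if m > 0:
            lo = math.ceil(math.log(m, 1 + eps))
            hi = math.floor(math.log(2 * k * m, 1 + eps))
            # Lazily maintain guesses in [m, 2km]: drop stale ones, add new ones.
            sieves = {j: sv for j, sv in sieves.items() if j >= lo}
            for j in range(lo, hi + 1):
                sieves.setdefault(j, ([], 0.0))
        for j, (sol, val) in sieves.items():
            if len(sol) < k:
                v = (1 + eps) ** j
                gain = f(sol + [e]) - val
                if gain >= (v / 2 - val) / (k - len(sol)):
                    sieves[j] = (sol + [e], val + gain)
    return max(sieves.values(), key=lambda sv: sv[1], default=([], 0.0))

# Toy set-coverage objective: each stream element indexes a set of items,
# and f(S) counts the distinct items covered by the chosen sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {7}}

def coverage(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

solution, value = sieve_stream(list(sets), coverage, k=2)
print(solution, value)  # a size-<=2 solution and the number of items it covers
```

The paper's actual algorithms additionally partition the stream into m windows (Algorithm 1) and take a parameter α; for those details, see the repository linked in the Open Source Code row.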