Parallel Double Greedy Submodular Maximization

Authors: Xinghao Pan, Stefanie Jegelka, Joseph E. Gonzalez, Joseph K. Bradley, Michael I. Jordan

NeurIPS 2014

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We implement and evaluate both algorithms on multi-core hardware and billion-edge graphs, demonstrating both the scalability and tradeoffs of each approach." |
| Researcher Affiliation | Academia | Department of Electrical Engineering and Computer Science, and Department of Statistics, University of California, Berkeley, Berkeley, CA, USA 94720. {xinghao,stefje,jegonzal,josephkb,jordan}@eecs.berkeley.edu |
| Pseudocode | Yes | Algorithm 3: Ser-2g (serial double greedy); Algorithm 4: CF-2g (coordination-free double greedy); Algorithm 5: CC-2g (concurrency control); Algorithm 6: CC-2g getGuarantee(e); Algorithm 7: CC-2g propose; Algorithm 8: CC-2g commit(e, i, ue, result) |
| Open Source Code | No | The paper does not provide any statement or link regarding the public release of its source code. |
| Open Datasets | Yes | "Our parallel algorithms were tested on the max graph cut and set cover problems with two synthetic graphs and three real datasets (Table 1)." The real datasets are Friendster [21], Arabic-2005, UK-2005, and IT-2004 [22, 23, 24]. |
| Dataset Splits | No | The paper applies its algorithms to entire graph datasets and does not mention train/validation/test splits for the input data in the traditional machine-learning sense. |
| Hardware Specification | Yes | "Experiments were conducted on Amazon EC2 using one cc2.8xlarge machine, up to 16 threads, for 10 repetitions." |
| Software Dependencies | No | "We implemented the parallel and serial double greedy algorithms in Java / Scala." No library names or version numbers are given. |
| Experiment Setup | Yes | "Experiments were conducted on Amazon EC2 using one cc2.8xlarge machine, up to 16 threads, for 10 repetitions. ... We randomly permuted the ordering of vertices." |
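To make the Pseudocode row concrete: the paper's Algorithm 3 (Ser-2g) serializes the randomized double greedy of Buchbinder et al. over a random permutation of the ground set. The sketch below is an illustrative reconstruction, not the authors' Java/Scala code; the function and variable names (`serial_double_greedy`, `ground_set`) are assumptions, and the max graph cut objective mirrors one of the paper's two test problems.

```python
import random

def serial_double_greedy(f, ground_set, seed=0):
    """Randomized double greedy for a non-negative submodular f
    (Buchbinder et al. style; a sketch of the paper's Ser-2g)."""
    rng = random.Random(seed)
    X, Y = set(), set(ground_set)        # X grows from empty, Y shrinks from full
    order = list(ground_set)
    rng.shuffle(order)                   # "We randomly permuted the ordering of vertices."
    for e in order:
        a = f(X | {e}) - f(X)            # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)            # marginal gain of removing e from Y
        a_pos, b_pos = max(a, 0.0), max(b, 0.0)
        # Randomly resolve e in proportion to the two non-negative gains.
        if a_pos + b_pos == 0 or rng.random() < a_pos / (a_pos + b_pos):
            X.add(e)
        else:
            Y.remove(e)
    return X                             # X == Y when the loop ends

# Usage: max graph cut on a 4-cycle; f(S) counts edges crossing the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
solution = serial_double_greedy(cut, range(4))
```

The parallel variants in the paper (CF-2g and CC-2g) run this per-element step concurrently across threads, either without coordination or under concurrency control, which is where the scalability-versus-serializability tradeoff arises.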