Faster and Scalable Algorithms for Densest Subgraph and Decomposition

Authors: Elfarouk Harb, Kent Quanrud, Chandra Chekuri

NeurIPS 2022

Reproducibility variables, each with the assessed result and the supporting LLM response:

Research Type: Experimental
Evidence: "We test our algorithm on real and synthetic data sets and show that it provides a significant benefit over previous algorithms." (Section 6, Experimental Evaluation)

Researcher Affiliation: Academia
Evidence: Elfarouk Harb (eyharb2@illinois.edu), University of Illinois at Urbana-Champaign; Kent Quanrud (krq@purdue.edu), Purdue University; Chandra Chekuri (chekuri@illinois.edu), University of Illinois at Urbana-Champaign.

Pseudocode: Yes
Evidence: Algorithm 1 presents dense graph decomposition (left) and dense supermodular set decomposition (right); Algorithm 2 presents the basic proximal gradient method (left) and the accelerated FISTA variant (right).

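The accelerated method named in Algorithm 2 is a FISTA-style projected gradient scheme. Below is a minimal sketch of that idea on the quadratic-load relaxation of densest subgraph (each edge splits one unit of mass between its endpoints, and the sum of squared vertex loads is minimized). This is an illustration, not the authors' implementation: the function name fista_densest_subgraph, the step-size bound, and the final sweep-based extraction are assumptions.

    import numpy as np

    def fista_densest_subgraph(edges, n, iters=200):
        # x[e] in [0, 1] is the fraction of edge e = (u, v) assigned to u;
        # the remaining 1 - x[e] goes to v. Vertex loads b are linear in x,
        # and we minimize sum_w b[w]^2 over the box [0, 1]^m.
        u = np.array([e[0] for e in edges])
        v = np.array([e[1] for e in edges])
        deg = np.bincount(np.concatenate([u, v]), minlength=n)
        eta = 1.0 / (4.0 * deg.max())  # conservative 1/L step (assumed bound)

        x = np.full(len(edges), 0.5)   # feasible start: split each edge evenly
        y, t = x.copy(), 1.0
        for _ in range(iters):
            b = (np.bincount(u, weights=y, minlength=n)
                 + np.bincount(v, weights=1.0 - y, minlength=n))  # loads at y
            grad = 2.0 * (b[u] - b[v])                 # d/dx[e] of sum_w b[w]^2
            x_new = np.clip(y - eta * grad, 0.0, 1.0)  # projected gradient step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum
            x, t = x_new, t_new

        # Sweep vertices in decreasing load order; return the densest prefix.
        b = (np.bincount(u, weights=x, minlength=n)
             + np.bincount(v, weights=1.0 - x, minlength=n))
        order = np.argsort(-b)
        rank = np.empty(n, dtype=int)
        rank[order] = np.arange(n)
        internal = np.bincount(np.maximum(rank[u], rank[v]), minlength=n).cumsum()
        densities = internal / np.arange(1, n + 1)
        k = int(densities.argmax())
        return set(order[:k + 1].tolist()), densities[k]

Fixing y = x and t = 1 on every iteration degenerates this sketch into the basic (non-accelerated) proximal gradient method of Algorithm 2's left column.
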
Open Source Code: Yes
Evidence: The code is included in the supplemental material; the datasets are available online (they are not the authors' data to include).

Open Datasets: Yes
Evidence: "We ran experiments on 7 real world datasets (6 from the SNAP database [47], and 1 from [48]), and one tailored synthetic dataset (used for clarifying an important difference between all algorithms) for a total of 8 datasets." The CLOSE-CLIQUES synthetic dataset consists of the complete bipartite graph K_{d,D} for d = 30 and D = 2000, plus 20 copies of the complete graph K_h for h = 60.
[47] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
[48] Jérôme Kunegis. KONECT: The Koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web (WWW '13 Companion), pages 1343-1350, New York, NY, USA, 2013.

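The CLOSE-CLIQUES construction above is explicit enough to reconstruct. A minimal sketch using networkx (the library choice and the function name close_cliques are mine, not from the paper):

    import networkx as nx

    def close_cliques(d=30, D=2000, h=60, copies=20):
        # K_{d,D} plus `copies` disjoint copies of K_h, per the description.
        G = nx.complete_bipartite_graph(d, D)
        for _ in range(copies):
            G = nx.disjoint_union(G, nx.complete_graph(h))
        return G

    G = close_cliques()
    assert G.number_of_nodes() == 30 + 2000 + 20 * 60            # 3230 vertices
    assert G.number_of_edges() == 30 * 2000 + 20 * (60 * 59 // 2)  # 95400 edges
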
Dataset Splits: No
Evidence: The paper evaluates algorithm performance on entire graphs, not via the explicit training/validation/test splits typical of machine learning models.

Hardware Specification: Yes
Evidence: "We ran our experiments on a Slurm-based university campus cluster. For all machines, we requested 1 node and 16 cores per experiment. The nodes had 64 GB of RAM and Xeon PHI 5100 CPUs."

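For concreteness, a hypothetical Slurm batch file consistent with the stated request (1 node, 16 cores, 64 GB nodes); the job name, binary, and input file are placeholders, not from the paper:

    #!/bin/bash
    #SBATCH --job-name=densest-subgraph    # placeholder job name
    #SBATCH --nodes=1                      # 1 node per experiment, as stated
    #SBATCH --ntasks-per-node=16           # 16 cores per experiment
    #SBATCH --mem=64G                      # nodes had 64 GB of RAM
    srun ./fista_parallel input_graph.txt  # hypothetical binary and input
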
Software Dependencies: No
Evidence: "All algorithms were implemented in C++17 and were compiled with O3 and UNROLL-LOOPS optimizations. FISTA-PARALLEL used Open MPI [49] for parallelism." The language, compiler flags, and MPI library are named, but no versions or a complete dependency list are reported.

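A plausible compile command matching the stated toolchain (C++17, O3, loop unrolling, Open MPI); the flag spelling -funroll-loops and the source/output names are assumptions:

    mpic++ -std=c++17 -O3 -funroll-loops -o fista_parallel fista_parallel.cpp
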
Experiment Setup: No
Evidence: The paper describes the experimental procedure (e.g., the number of iterations over which density was monitored, and wall-clock time) and the algorithmic details, but it does not report explicit values for typical hyperparameters such as learning rate, batch size, or other optimization settings.

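The monitoring protocol described above can be sketched as follows; step and extract_density are hypothetical callables standing in for one solver iteration and the density-extraction (rounding) step, respectively:

    import time

    def run_with_monitoring(step, extract_density, iters=500):
        # Record (iteration, elapsed wall-clock seconds, current density)
        # after every iteration of the solver.
        history = []
        start = time.perf_counter()
        for k in range(iters):
            step()  # one proximal-gradient / FISTA iteration
            elapsed = time.perf_counter() - start
            history.append((k + 1, elapsed, extract_density()))
        return history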