The Triangle-Densest-K-Subgraph Problem: Hardness, Lovász Extension, and Application to Document Summarization

Authors: Aritra Konar, Nicholas D. Sidiropoulos (pp. 4075-4082)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that our approaches attain state-of-the-art performance on real-world graphs and can offer substantially improved exploration of the optimal density-size curve compared to sophisticated approximation baselines for DkS. We use document summarization to showcase why TDkS is a useful generalization of DkS."
Researcher Affiliation | Academia | Aritra Konar and Nicholas D. Sidiropoulos, Department of Electrical and Computer Engineering, University of Virginia, VA, USA ({aritra, nikos}@virginia.edu)
Pseudocode | Yes |
Algorithm 1: MIRROR DESCENT
Input: triangle list, triangle weights {w_t}, subgraph size k, bisection tolerance ε > 0.
Initialize: x_1 = (k/n)·1, r = 1.
1: while convergence criterion is not met do
2:   Obtain g_r ∈ ∂f_L(x_r).
3:   Update step size β_r = c/√r.
4:   y_r := x_r ∘ exp(−β_r · g_r).
5:   x_{r+1} = BISECTION(y_r, k, ε).
6:   Update r = r + 1.
7: end while
8: return x̄ = (1/r) Σ_{i=1}^{r} x_i
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | "We used a collection of graph datasets (summarized in Table 2) from standard repositories (Leskovec and Krevl 2014; Kunegis 2013) to test the performance of all methods."
Dataset Splits | No | The paper mentions using a "collection of graph datasets" but does not specify any training, validation, or testing splits for these datasets.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., programming languages, libraries, frameworks).
Experiment Setup | No | While the paper describes the Mirror Descent algorithm and its theoretical properties, it does not provide concrete hyperparameter values (e.g., the step-size constant c, the number of iterations) or other detailed training configurations for the experimental setup.
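The mirror-descent pseudocode quoted in the table can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: it assumes an entropic (multiplicative-weights) update and realizes the BISECTION projection onto the capped simplex {x : 0 ≤ x ≤ 1, 1ᵀx = k} by a scalar bisection search; the subgradient oracle `subgrad` and all parameter defaults (`n_iter`, `c`) are hypothetical stand-ins, since the paper does not report them.

```python
import numpy as np

def bisection(y, k, eps=1e-10):
    """Project a positive vector y onto {x : 0 <= x <= 1, sum(x) = k} by
    bisecting on a scalar lam such that sum(min(1, lam * y)) = k.
    (Hypothetical realization of the BISECTION subroutine in Algorithm 1.)"""
    lo, hi = 0.0, 1.0
    while np.minimum(1.0, hi * y).sum() < k:  # grow hi until it overshoots k
        hi *= 2.0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if np.minimum(1.0, mid * y).sum() < k:
            lo = mid
        else:
            hi = mid
    return np.minimum(1.0, hi * y)

def mirror_descent(subgrad, n, k, n_iter=100, c=1.0):
    """Entropic mirror descent over the capped simplex, following the
    structure of Algorithm 1. `subgrad(x)` stands in for a subgradient
    oracle of the Lovasz extension f_L (an assumption for illustration)."""
    x = np.full(n, k / n)          # x_1 = (k/n) * 1
    running_sum = np.zeros(n)
    for r in range(1, n_iter + 1):
        g = subgrad(x)             # g_r in the subdifferential of f_L at x_r
        beta = c / np.sqrt(r)      # diminishing step size beta_r = c / sqrt(r)
        y = x * np.exp(-beta * g)  # multiplicative (mirror) update
        x = bisection(y, k)        # project back onto {0 <= x <= 1, 1'x = k}
        running_sum += x
    return running_sum / n_iter    # averaged iterate x_bar
```

The averaged iterate is returned because, for nonsmooth objectives, mirror descent guarantees hold for the ergodic average rather than the last iterate; the bisection search exploits the fact that sum(min(1, λ·y)) is monotone nondecreasing in λ.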