Capacity Releasing Diffusion for Speed and Locality
Authors: Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluation demonstrates improved results, in particular for realistic social graphs where there are moderately good but not very good clusters. From Section 4 (Empirical Illustration): We have compared the performance of the CRD algorithm (Algorithm 1), the Andersen-Chung-Lang local spectral algorithm (ACL) (2006), and the flow-improve algorithm (Flow Imp) (Andersen & Lang, 2008). |
| Researcher Affiliation | Academia | 1EECS, UC Berkeley, Berkeley, CA, USA 2ICSI and Statistics, UC Berkeley, Berkeley, CA, USA 3Computer Science, University of Vienna, Vienna, Austria. |
| Pseudocode | Yes | Algorithm 1 CRD Algorithm(G, vs, φ, τ, t) and Algorithm 2 CRD-inner(G, m(·), φ) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | For the 4 real-world graphs, we use the Facebook college graphs of John Hopkins (Hop.), Rice, Simmons (Sim.), and Colgate, as introduced in Traud et al. (2012). |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., exact percentages, sample counts, or cross-validation setup) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the CRD algorithm's parameters (e.g., φ, τ, t) conceptually but does not report the concrete parameter values or run configurations used in its experiments. |
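Since no code was released, reproducing the comparison above (CRD vs. ACL vs. Flow Imp) requires reimplementing both the algorithms and the evaluation. The quality measure these local clustering methods target is conductance, φ(S) = cut(S) / min(vol(S), vol(V∖S)). The sketch below (not the authors' code; a minimal illustration using SciPy sparse matrices) shows how a candidate cluster returned by any of the three algorithms could be scored:

```python
import numpy as np
from scipy.sparse import csr_matrix

def conductance(A, S):
    """Conductance of node set S in an undirected graph with sparse
    adjacency matrix A: cut(S) / min(vol(S), vol(V - S))."""
    S = np.asarray(sorted(S))
    degrees = np.asarray(A.sum(axis=1)).ravel()
    vol_S = degrees[S].sum()
    vol_rest = degrees.sum() - vol_S
    # Directed edge entries with both endpoints inside S.
    internal = A[S][:, S].sum()
    cut = vol_S - internal          # boundary edges leaving S
    return cut / min(vol_S, vol_rest)

# Toy example: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(6, 6))
print(conductance(A, {0, 1, 2}))  # 1 cut edge / volume 7 ≈ 0.1429
```

For the real-world experiments, the Facebook college graphs of Traud et al. (2012) are distributed as adjacency matrices that can be loaded into the same sparse format, so the scoring step is unchanged across the synthetic and social-network graphs.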