Scaling Fine-grained Modularity Clustering for Massive Graphs
Authors: Hiroaki Shiokawa, Toshiyuki Amagasa, Hiroyuki Kitagawa
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experimental Evaluation: In this section, we experimentally evaluate the efficiency and the clustering accuracy of gScarf. We compared gScarf with the following state-of-the-art graph clustering baseline algorithms: |
| Researcher Affiliation | Academia | Hiroaki Shiokawa, Toshiyuki Amagasa and Hiroyuki Kitagawa, Center for Computational Sciences, University of Tsukuba, Japan, {shiokawa, amagasa, kitagawa}@cs.tsukuba.ac.jp |
| Pseudocode | Yes | Algorithm 1 Proposed method: gScarf |
| Open Source Code | No | The paper does not include a statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We used six real-world graphs published by SNAP [Leskovec and Krevl, 2014] and LAW [Boldi et al., 2011]. [...] In our experiments, we also used synthetic graphs with their ground-truth clusters generated by LFR-benchmark [Lancichinetti et al., 2009]. |
| Dataset Splits | No | The paper mentions using real-world graphs and LFR-benchmark graphs, but it does not specify any training, validation, or test splits. It evaluates clustering quality against ground-truth clusters directly. |
| Hardware Specification | Yes | All experiments were conducted on a Linux server with Intel Xeon E5-2690 CPU 2.60 GHz and 128 GB RAM. |
| Software Dependencies | No | The paper states that 'All algorithms were implemented in C/C++ as a single-threaded program' but does not specify any software libraries, frameworks, or their version numbers. |
| Experiment Setup | Yes | We set ϵ = 0.6 and µ = 5 as the same settings as used in [Chang et al., 2017]. [...] We used the recommended value θ = 0.06. |
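For readers attempting a reproduction, the sketch below computes the standard modularity score Q = Σ_c [ e_c/m − (d_c/(2m))² ] (m = total edges, e_c = intra-cluster edges, d_c = degree sum of cluster c), which is the quantity gScarf and the baselines optimize. This is a minimal illustrative example in C++ (the paper states all algorithms were implemented in C/C++), not the authors' gScarf code; the `modularity` function and the toy graph are hypothetical.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Modularity of a clustering of an undirected, unweighted graph:
//   Q = sum over clusters c of [ e_c/m - (d_c / (2m))^2 ]
double modularity(const std::vector<std::pair<int, int>>& edges,
                  const std::vector<int>& cluster) {
    int numClusters = 0;
    for (int c : cluster) numClusters = std::max(numClusters, c + 1);

    std::vector<double> intraEdges(numClusters, 0.0);  // e_c
    std::vector<double> degreeSum(numClusters, 0.0);   // d_c
    const double m = static_cast<double>(edges.size());

    for (const auto& e : edges) {
        degreeSum[cluster[e.first]] += 1.0;
        degreeSum[cluster[e.second]] += 1.0;
        if (cluster[e.first] == cluster[e.second])
            intraEdges[cluster[e.first]] += 1.0;
    }

    double q = 0.0;
    for (int c = 0; c < numClusters; ++c) {
        const double frac = degreeSum[c] / (2.0 * m);
        q += intraEdges[c] / m - frac * frac;
    }
    return q;
}

int main() {
    // Toy graph: two triangles (vertices 0-2 and 3-5) joined by one bridge edge.
    const std::vector<std::pair<int, int>> edges = {
        {0, 1}, {1, 2}, {0, 2}, {3, 4}, {4, 5}, {3, 5}, {2, 3}};
    const std::vector<int> cluster = {0, 0, 0, 1, 1, 1};
    std::printf("Q = %.4f\n", modularity(edges, cluster));  // prints Q = 0.3571
    return 0;
}
```

Compiled with a standard toolchain (e.g., `g++ -O2`), the toy example yields Q ≈ 0.357, which can serve as a quick sanity check before evaluating modularity on the SNAP, LAW, or LFR-benchmark graphs mentioned above.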