Communication-Optimal Distributed Dynamic Graph Clustering

Authors: Chun Jiang Zhu, Tan Zhu, Kam-Yiu Lam, Song Han, Jinbo Bi (pp. 5957-5964)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conducted extensive experiments on both synthetic and real-life datasets which confirmed the communication efficiency of our approach over baseline algorithms while achieving comparable clustering results."
Researcher Affiliation | Academia | "(1) Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, USA {chunjiang.zhu, tan.zhu, song.han, jinbo.bi}@uconn.edu; (2) Department of Computer Science, City University of Hong Kong, Hong Kong, PRC cskylam@cityu.edu.hk"
Pseudocode | Yes | "Algorithm 1: D2-CABL at Time Point τ"
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository.
Open Datasets | No | The paper describes the "Gaussians dataset" and "Sculpture dataset" used for experiments, but it does not provide concrete access information (e.g., a direct link, DOI, or formal citation for public availability) for these datasets. Footnotes for other data sources are motivational examples, not the experimental datasets.
Dataset Splits | No | The paper does not specify exact train/validation/test split percentages, absolute sample counts for splits, or predefined splits referenced with citations for reproducibility.
Hardware Specification | Yes | "We implemented all five algorithms in Matlab programs, and conducted the experiments on a machine equipped with Intel i7 7700 2.8GHz CPU, 8G RAM and 1T disk storage."
Software Dependencies | No | The paper states, "We implemented all five algorithms in Matlab programs," but it does not provide specific version numbers for Matlab or for any other software libraries or dependencies used.
Experiment Setup | Yes | "As the baseline setting, we selected the total number of time points t = 10 and the total number of sites s = 30. We randomly chose 5% of edges to delete at a random time point after their arrival."