Clustering from Labels and Time-Varying Graphs
Authors: Shiau Hong Lim, Yudong Chen, Huan Xu
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical findings are further supported by empirical results on both synthetic and real data. |
| Researcher Affiliation | Academia | Shiau Hong Lim, National University of Singapore (mpelsh@nus.edu.sg); Yudong Chen, EECS, University of California, Berkeley (yudong.chen@eecs.berkeley.edu); Huan Xu, National University of Singapore (mpexuh@nus.edu.sg) |
| Pseudocode | No | The paper describes its algorithms mathematically and in prose but does not include structured pseudocode or a dedicated algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology or provide a link to a code repository. |
| Open Datasets | Yes | For real data, we use the Reality Mining dataset [26], which contains individuals from two main groups, the MIT Media Lab and the Sloan Business School, which we use as the ground-truth clusters. The dataset records when two individuals interact, i.e., become proximal of each other or make a phone call, over a 9-month period. We choose a window of 14 weeks (the Fall semester) where most individuals have non-empty interaction data. These consist of 85 individuals with 25 of them from Sloan. |
| Dataset Splits | No | The paper mentions 'In each trial, the in/cross-cluster distributions are estimated from a fraction of randomly selected pairwise interaction data,' but does not specify explicit dataset split percentages (e.g., 80/10/10) or absolute sample counts for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using and adapting the ADMM algorithm of [25] but does not specify any software packages with version numbers needed for reproducibility (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper describes general experimental procedures like 'We perform 100 trials' and how data distributions are estimated, but it does not provide specific details on hyperparameters, training configurations, or system-level settings (e.g., learning rate, batch size, optimizer settings). |