Interpretable Clustering on Dynamic Graphs with Recurrent Graph Neural Networks

Authors: Yuhang Yao, Carlee Joe-Wong

AAAI 2021, pp. 4608-4616 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We finally demonstrate that the proposed architectures perform well on real data compared to state-of-the-art graph clustering algorithms. Experimental results on real and simulated datasets show that our proposed RNN-GCN architectures outperform state-of-the-art graph clustering algorithms.
Researcher Affiliation | Academia | Yuhang Yao, Carlee Joe-Wong, Carnegie Mellon University, {yuhangya,cjoewong}@andrew.cmu.edu
Pseudocode | Yes | Algorithm 1: RNNGCN
Open Source Code | Yes | The code of all methods and datasets is publicly available at https://github.com/InterpretableClustering/InterpretableClustering.
Open Datasets | Yes | We conducted experiments on five real datasets, as shown in Table 1, which have the properties shown in Table 2. ... DBLP-E dataset is extracted from the computer science bibliography website DBLP (https://dblp.org), which provides open bibliographic information... Reddit dataset is generated from Reddit (https://www.reddit.com/)... Brain dataset is generated from functional magnetic resonance imaging (fMRI) data (https://tinyurl.com/y4hhw8ro).
Dataset Splits | Yes | We divide each dataset into 70% training / 20% validation / 10% test points.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'ReLU' but does not specify version numbers for any programming languages, libraries, or frameworks used for implementation.
Experiment Setup | Yes | Each method uses two hidden Graph Neural Network layers (GCN, GAT, GraphSAGE, etc.) with the layer size equal to the number of classes in the dataset. We add a dropout layer between the two layers with dropout rate 0.5. We use the Adam optimizer with learning rate 0.0025. Each method is trained with 500 iterations.
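
The quoted experiment setup maps directly onto a standard two-layer GNN training loop. Below is a minimal sketch of that configuration, assuming PyTorch and PyTorch Geometric; the paper does not name its framework (see the Software Dependencies row), so the library choice, the `num_features` dimension, and the `data` object with its split masks are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the reported setup: two GCN layers of size num_classes,
# dropout 0.5 between them, Adam with lr=0.0025, 500 training iterations.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class TwoLayerGCN(torch.nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        # Both hidden layers have size equal to the number of classes.
        self.conv1 = GCNConv(num_features, num_classes)
        self.conv2 = GCNConv(num_classes, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)  # dropout rate 0.5
        return self.conv2(x, edge_index)


def train(model, data, iterations=500):
    # Adam optimizer with learning rate 0.0025, trained for 500 iterations.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0025)
    model.train()
    for _ in range(iterations):
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    return model
```

The 70% / 20% / 10% split from the Dataset Splits row would correspond to the `train_mask`, `val_mask`, and `test_mask` attributes of the hypothetical `data` object used above.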