Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
CONGREGATE: Contrastive Graph Clustering in Curvature Spaces
Authors: Li Sun, Feiyang Wang, Junda Ye, Hao Peng, Philip S. Yu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on real-world graphs show that our model outperforms the state-of-the-art competitors. Experiments. We evaluate the superiority of our model with 19 strong competitors on 4 datasets, examine the proposed components by ablation study, and further discuss why Ricci curvature works. |
| Researcher Affiliation | Academia | 1North China Electric Power University, Beijing 102206, China 2Beijing University of Posts and Telecommunications, Beijing 100876, China 3Beihang University, Beijing 100191, China 4Department of Computer Science, University of Illinois at Chicago, IL, USA |
| Pseudocode | Yes | Algorithm 1: Training CONGREGATE |
| Open Source Code | Yes | Further details and code are provided at https://github.com/CurvCluster/Congregate. |
| Open Datasets | Yes | To evaluate the proposed model, we choose 4 public datasets, i.e., Cora and Citeseer [Devvrit et al., 2022], and larger MAG-CS [Park et al., 2022] and Amazon-Photo [Li et al., 2022]. |
| Dataset Splits | No | The paper does not provide specific percentages or sample counts for training, validation, or test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions 'Riemannian Adam [Kochurov et al., 2020]' which implies a PyTorch dependency from the cited work, but it does not explicitly state specific version numbers for software libraries or dependencies. |
| Experiment Setup | Yes | The grid search is performed over search spaces for the hyperparameters, e.g., learning rate: [0.001, 0.003, 0.005, 0.008, 0.01], dropout rate: [0.0, 0.1, 0.2, 0.3, 0.4]. We utilize a 2-layer MLP to approximate the fine-grained curvature. In RGC loss, hyperparameter β of the reweighting is 2 as default. |
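The grid search quoted above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: `train_and_eval` is a hypothetical placeholder for training CONGREGATE with a given configuration and returning a validation clustering score.

```python
from itertools import product

# Hyperparameter grids quoted from the paper's experiment setup.
learning_rates = [0.001, 0.003, 0.005, 0.008, 0.01]
dropout_rates = [0.0, 0.1, 0.2, 0.3, 0.4]

def train_and_eval(lr: float, dropout: float) -> float:
    """Hypothetical stand-in: a real run would train the model with
    (lr, dropout) and return a validation metric such as NMI."""
    return -(abs(lr - 0.005) + dropout)  # dummy objective for illustration

# Exhaustively evaluate every (lr, dropout) combination and keep the best.
best_config = max(product(learning_rates, dropout_rates),
                  key=lambda cfg: train_and_eval(*cfg))
print(best_config)
```

With the dummy objective, the search selects `(0.005, 0.0)`; in the real pipeline the winner would depend on the validation metric returned for each configuration.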