Hierarchical Graph Capsule Network
Authors: Jinyu Yang, Peilin Zhao, Yu Rong, Chaochao Yan, Chunyuan Li, Hehuan Ma, Junzhou Huang
AAAI 2021, pp. 10603-10611 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental studies demonstrate the effectiveness of HGCN and the contribution of each component. |
| Researcher Affiliation | Collaboration | ¹University of Texas at Arlington; ²Tencent AI Lab (jzhuang@uta.edu) |
| Pseudocode | Yes | Algorithm 1: Training process with K latent factors, L capsule layers, and R iterations of routing. |
| Open Source Code | Yes | Code: https://github.com/uta-smile/HGCN |
| Open Datasets | Yes | Eleven commonly used benchmarks are used in this study: (i) seven biological graph datasets, i.e., MUTAG, NCI1, PROTEINS, D&D, ENZYMES, PTC, NCI109; and (ii) four social graph datasets, i.e., COLLAB, IMDB-Binary (IMDB-B), IMDB-Multi (IMDB-M), Reddit-Binary (RE-B). |
| Dataset Splits | Yes | The authors perform 10-fold cross-validation for performance evaluation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments. |
| Software Dependencies | No | The paper mentions adopting GCN but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We set K = 4, R = 3, λ = 0.5, β = 0.1, L = 2, and follow the same settings in previous studies (Ying et al. 2018b) to perform 10-fold cross-validation for performance evaluation. |
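Algorithm 1's R iterations of routing presumably follow the standard capsule routing-by-agreement scheme. A minimal NumPy sketch under that assumption (the function names `squash` and `route` are illustrative, not from the paper, and capsule dimensions are placeholders):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity: preserves vector orientation,
    # maps the vector norm into [0, 1).
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def route(u_hat, R=3):
    """Routing-by-agreement between lower and upper capsules.

    u_hat: prediction vectors of shape (n_in, n_out, dim).
    Returns upper-capsule outputs of shape (n_out, dim).
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(R):                                        # R routing iterations
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of predictions
        v = squash(s)                                         # upper-capsule outputs
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v
```

The paper's reported setting R = 3 corresponds to three passes of the coupling-coefficient update above.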
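The reported setup (K = 4, R = 3, λ = 0.5, β = 0.1, L = 2, with 10-fold cross-validation) can be sketched as a configuration plus a fold split. This is a plain NumPy illustration, not the authors' code; the helper name `ten_fold_indices` and the dataset size are assumptions (188 graphs, as in MUTAG):

```python
import numpy as np

# Hyperparameters reported in the paper (keys follow the paper's symbols).
CONFIG = {"K": 4, "R": 3, "lambda": 0.5, "beta": 0.1, "L": 2}

def ten_fold_indices(n, seed=0):
    # Shuffle sample indices and split them into 10 near-equal folds;
    # each fold serves once as the held-out test set, the rest as training data.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, 10)

folds = ten_fold_indices(188)  # e.g., MUTAG contains 188 graphs
```

Each of the 10 folds is evaluated once, and the mean accuracy across folds is the reported metric.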