Block Modeling-Guided Graph Convolutional Neural Networks
Authors: Dongxiao He, Chundong Liang, Huixin Liu, Mingxiang Wen, Pengfei Jiao, Zhiyong Feng
AAAI 2022, pp. 4022-4029
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate the superiority of our new approach over existing methods in heterophilic datasets while maintaining a competitive performance in homophilic datasets. |
| Researcher Affiliation | Academia | (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the open-source code for the methodology described. |
| Open Datasets | Yes | We conduct experiments on six real-world datasets with different homophily ratios. Among them, Cora, Citeseer and Pubmed (Bojchevski and Günnemann 2017; Sen et al. 2008; Namata et al. 2012) are three citation networks... Texas, Chameleon and Squirrel (Rozemberczki, Allen, and Sarkar 2021) are three webpage datasets... |
| Dataset Splits | Yes | For all datasets, we use the same splits as Geom-GCN (Pei et al. 2020) and measure the performance of all models on the test sets over 10 random splits. (See the data-loading sketch below the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instance types used for running its experiments. |
| Software Dependencies | No | The paper mentions various models and methods but does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x, TensorFlow x.x). |
| Experiment Setup | Yes | For our proposed method, we set the number of GCN layers k to 2 for Texas and 3 for the other five datasets. We set the balance parameter of the loss λ to 0.5, the dropout ratio to 0.5, the learning rate to 0.001, and the weight decay to 0.0005. We search over the enhancement factor α and the self-loop coefficient β from 0 to 4 for each dataset. (See the configuration sketch below the table.) |
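
The split protocol in the table maps onto standard tooling. Below is a minimal data-loading sketch assuming a recent PyTorch Geometric; the dataset classes, the `split="geom-gcn"` option, and the `ROOT` directory are assumptions about the reader's environment, not the authors' code.

```python
# Hedged sketch: load the six benchmarks with the shared Geom-GCN splits.
from torch_geometric.datasets import Planetoid, WebKB, WikipediaNetwork

ROOT = "data"  # hypothetical download directory

def load(name: str):
    """Return a graph whose masks carry the 10 Geom-GCN splits."""
    if name in ("Cora", "Citeseer", "Pubmed"):
        return Planetoid(ROOT, name, split="geom-gcn")[0]
    if name == "Texas":
        return WebKB(ROOT, name)[0]  # WebKB ships the same 10 splits
    return WikipediaNetwork(ROOT, name, geom_gcn_preprocess=True)[0]  # Chameleon, Squirrel

data = load("Chameleon")
# train_mask / val_mask / test_mask have shape [num_nodes, 10]: one column
# per random split; reported test accuracy is the mean over the 10 columns.
for split in range(data.train_mask.size(1)):
    train_idx = data.train_mask[:, split].nonzero(as_tuple=True)[0]
```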
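
The fixed hyperparameters in the Experiment Setup row translate directly into a training configuration. The sketch below is one plausible wiring in PyTorch: the placeholder model, the Adam optimizer, and the integer step of the α/β grid are assumptions, since the paper states only the values and ranges.

```python
# Hedged sketch of the reported configuration; the model body is a stand-in,
# not the paper's block-modeling-guided GCN.
import itertools
import torch

def make_model(num_layers: int, dropout: float) -> torch.nn.Module:
    layers = []
    for _ in range(num_layers):  # k GCN layers in the real model
        layers += [torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Dropout(dropout)]
    return torch.nn.Sequential(*layers)

dataset = "Texas"
k = 2 if dataset == "Texas" else 3   # layers: 2 for Texas, 3 for the other five
lam = 0.5                            # balance parameter lambda of the loss
model = make_model(num_layers=k, dropout=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)

# alpha (enhancement factor) and beta (self-loop coefficient) are each
# searched from 0 to 4 per dataset; an integer grid is assumed here.
for alpha, beta in itertools.product(range(5), range(5)):
    pass  # train with (alpha, beta) and keep the best validation accuracy
```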