Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating

Authors: Yixin Liu, Yizhen Zheng, Daokun Zhang, Vincent CS Lee, Shirui Pan

AAAI 2023

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
Evidence: "We conducted extensive experiments on 14 benchmark datasets and multiple learning scenarios to demonstrate the superiority of GREET." From the Experimental Settings section: "Datasets. We take transductive node classification as the downstream task to evaluate the effectiveness of the learned representations. Our experiments are conducted on 14 commonly used benchmark datasets..."
Researcher Affiliation: Academia
Evidence: "Yixin Liu (1), Yizhen Zheng (1), Daokun Zhang (1), Vincent CS Lee (1), Shirui Pan (2); (1) Monash University, Australia; (2) Griffith University, Australia. {yixin.liu, yizhen.zheng1, daokun.zhang, vincent.cs.lee}@monash.edu, s.pan@griffith.edu.au"
Pseudocode: Yes
Evidence: "We present the algorithmic description in Appendix B."
Open Source Code: Yes
Evidence: "The code of GREET is available at https://github.com/yixinliu233/GREET."
Open Datasets: Yes
Evidence: "Our experiments are conducted on 14 commonly used benchmark datasets, including 8 homophilic graph datasets (i.e., Cora, CiteSeer, PubMed, Wiki-CS, Amazon Computer, Amazon Photo, Coauthor CS, and Coauthor Physics (Sen et al. 2008; Mernyei and Cangea 2020; Shchur et al. 2018)) and 6 heterophilic graph datasets (i.e., Chameleon, Squirrel, Actor, Cornell, Texas, and Wisconsin (Pei et al. 2020)). We split all datasets following the public splits (Yang, Cohen, and Salakhudinov 2016; Kipf and Welling 2017; Pei et al. 2020) or commonly used splits (Zhu et al. 2021; Thakoor et al. 2022). The details of datasets are summarized in Appendix D."
Dataset Splits: Yes
Evidence: "We split all datasets following the public splits (Yang, Cohen, and Salakhudinov 2016; Kipf and Welling 2017; Pei et al. 2020) or commonly used splits (Zhu et al. 2021; Thakoor et al. 2022). We conduct grid search to choose the best hyper-parameters on validation set."
Hardware Specification: No
Evidence: The paper does not describe the hardware (e.g., GPU model, CPU type, memory size) used for its experiments. A footnote in Table 1 mentions that "OOM indicates Out-Of-Memory on a 24GB GPU," which hints at the GPU memory available when running some baselines, but this is not a specification of the authors' own experimental hardware.
Software Dependencies: No
Evidence: The paper does not provide version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow, or other libraries).
Experiment Setup: No
Evidence: The paper states, "Specific hyper-parameter settings and more implementation details are in Appendix E." The information exists, but it is deferred to the appendix rather than provided in the main text.