Higher-Order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes

Authors: Yiming Huang, Yujie Zeng, Qiang Wu, Linyuan Lü

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theoretical analysis highlights HiGCN's advanced expressiveness, supported by empirical performance gains across various tasks. Additionally, our empirical investigations reveal that the proposed model accomplishes state-of-the-art performance on a range of graph tasks and provides a scalable and flexible solution to explore higher-order interactions in graphs.
Researcher Affiliation | Academia | Yiming Huang1,2,*, Yujie Zeng1,2,*, Qiang Wu3, Linyuan Lü4,1,2; 1Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China; 2Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China; 3Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China; 4School of Cyber Science and Technology, University of Science and Technology of China. {yiminghuang, yujiezeng}@std.uestc.edu.cn, {qiang.wu, linyuan.lv}@uestc.edu.cn
Pseudocode | No | The paper describes algorithms and derivations but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Codes and datasets are available at https://github.com/Yiminghh/HiGCN.
Open Datasets | Yes | We perform the node classification task employing five homogeneous graphs, encompassing three citation graphs Cora, CiteSeer, PubMed (Yang, Cohen, and Salakhudinov 2016) and two Amazon co-purchase graphs, Computers and Photo (Shchur et al. 2018). Additionally, we include five heterogeneous graphs, namely Wikipedia graphs Chameleon and Squirrel (Rozemberczki, Allen, and Sarkar 2021), the Actor co-occurrence graph, and the webpage graphs Texas and Wisconsin from WebKB (Pei et al. 2020).
Dataset Splits | Yes | We randomly partition the node set into train/validation/test subsets with a ratio of 60%/20%/20%, and repeat the experiments 100 times.
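The 60%/20%/20% random split protocol quoted above can be sketched in plain Python. This is a minimal illustration of the described procedure, not the authors' code; the function name, node count, and seeds are assumptions for the example.

```python
import random

def split_nodes(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly partition node indices into train/validation/test subsets."""
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    # Remaining indices (~20%) form the test set
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# The paper repeats experiments 100 times; one way is one random split per run
# (2708 is Cora's node count, used here purely as an example)
splits = [split_nodes(2708, seed=s) for s in range(100)]
```

Each split is a disjoint partition of the node set, so accuracy can be averaged over the 100 runs.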
Hardware Specification | Yes | All experiments are conducted on an NVIDIA RTX 3090 GPU with 24GB of memory and a 12th Gen Intel(R) Core(TM) i9-12900K.
Software Dependencies | Yes | All experiments are implemented by PyTorch 2.1.0 with Python 3.10.8.
Experiment Setup | Yes | Detailed data introduction and experimental settings are deferred to Appendices H and I, respectively. In Appendix I: "We use the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.0005. The batch size is set to 256 for node classification tasks and 32 for graph classification tasks. The maximum number of epochs is 500 for all tasks, and we use early stopping with a patience of 50 epochs. We train the model for 100 runs and report the mean accuracy."
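The early-stopping rule quoted above (patience of 50 epochs within a 500-epoch budget) amounts to a simple counter on validation performance. The sketch below illustrates that mechanism in plain Python; the class name and loop structure are assumptions, not taken from the authors' implementation.

```python
class EarlyStopping:
    """Stop training when validation accuracy fails to improve for `patience` epochs."""

    def __init__(self, patience=50):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_acc):
        """Record one epoch's validation accuracy; return True when training should stop."""
        if val_acc > self.best:
            self.best = val_acc
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Illustrative training-loop skeleton matching the quoted settings
stopper = EarlyStopping(patience=50)
for epoch in range(500):  # maximum of 500 epochs, as in Appendix I
    val_acc = 0.0  # placeholder: evaluate the model on the validation split here
    if stopper.step(val_acc):
        break
```

In a real run, `val_acc` would come from evaluating the model each epoch; the other hyperparameters quoted above (Adam, lr 0.001, weight decay 0.0005) would be passed to the optimizer separately.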