HoloNets: Spectral Convolutions do extend to Directed Graphs

Authors: Christian Koke, Daniel Cremers

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments in real world settings, showcasing that directed spectral convolutional networks provide new state of the art results for heterophilic node classification on many datasets and, as opposed to baselines, may be rendered stable to resolution-scale varying topological perturbations.
Researcher Affiliation | Academia | Christian Koke & Daniel Cremers, Technical University Munich and Munich Center for Machine Learning, {christian.koke,cremers}@tum.de
Pseudocode | Yes | We additionally provide a pseudocode description of the corresponding models in Appendix J. ... Algorithm 1: The forward function of HoloNets (see the forward-pass sketch after this table)
Open Source Code | Yes | Our code is available at https://github.com/ChristianKoke/HoloNets.
Open Datasets | Yes | We evaluate on the task of node classification on several directed benchmark datasets with high heterophily: Chameleon & Squirrel (Pei et al., 2020), Arxiv-Year (Hu et al., 2020b), Snap Patents (Lim et al., 2021) and Roman-Empire (Platonov et al., 2023). ... We utilize the QM7 dataset (Rupp et al., 2012).
Dataset Splits | Yes | For OGBN-Arxiv we use the fixed split provided by OGB (Hu et al., 2020b), for Chameleon and Squirrel we use the fixed GEOM-GCN splits (Pei et al., 2020), for Arxiv-Year and Snap-Patents we use the splits provided in Lim et al. (2021), while for Roman-Empire we use the splits from Platonov et al. (2023). ... We shuffle the dataset and randomly select 1500 molecules for testing. We then train on the remaining graphs. (See the data-split sketch after this table.)
Hardware Specification | Yes | All experiments are conducted on a machine with an NVIDIA A4000 GPU with 16GB of memory, save for experiments on snap-patents, which have been performed on a machine with one NVIDIA Quadro RTX 8000 with 48GB of memory.
Software Dependencies | No | The paper mentions using the Adam optimizer and implies Python through the provided code repository, but it does not list specific software dependencies with version numbers (e.g., PyTorch 1.9, scikit-learn 0.24).
Experiment Setup | Yes | In all experiments, we use the Adam optimizer and train the model for 10000 epochs, using early stopping on the validation accuracy with a patience of 200 for all datasets apart from Chameleon and Squirrel, for which we use a patience of 400. ... Our search space for generic hyperparameters is given by varying the learning rate lr ∈ {0.01, 0.005, 0.001, 0.0005}, the hidden dimension over F ∈ {32, 64, 128, 256, 512}, the number of layers over L ∈ {2, 3, 4, 5, 6} ... Final selected hyperparameters are listed in Table 7. (See the training-loop sketch after this table.)
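
Forward-pass sketch. For orientation only, the snippet below shows what a polynomial spectral filter on a directed graph can look like. It is not the authors' Algorithm 1 (HoloNets derive their filters via holomorphic functional calculus); the class name, the out-degree row normalization and the real-valued filter taps are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class DirectedSpectralConvSketch(nn.Module):
    """Illustrative polynomial filter of a directed propagation matrix.

    Not the authors' Algorithm 1: class name, normalization and the
    real-valued filter taps are assumptions for illustration only.
    """

    def __init__(self, in_dim, out_dim, order=3):
        super().__init__()
        # One learnable linear map ("filter tap") per power of the propagation matrix.
        self.taps = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(order + 1)]
        )

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: dense (N, N) directed adjacency.
        # Out-degree row normalization; the operator stays non-symmetric,
        # which is exactly the directed-graph setting.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        prop = adj / deg
        out = self.taps[0](x)
        h = x
        for tap in self.taps[1:]:
            h = prop @ h        # propagate along directed edges
            out = out + tap(h)  # accumulate the filter response
        return out
```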
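Data-split sketch. The QM7 protocol quoted in the "Dataset Splits" row (shuffle, hold out 1500 molecules for testing, train on the rest) amounts to the following; the function name, the use of torch.randperm and the fixed seed are illustrative assumptions, not the authors' code.

```python
import torch


def qm7_style_split(dataset, num_test=1500, seed=0):
    """Shuffle an indexable graph dataset, hold out `num_test` graphs for
    testing and train on the remainder (mirrors the quoted QM7 protocol).
    The seed is an assumption added to make the sketch reproducible."""
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(dataset), generator=gen).tolist()
    test_idx, train_idx = perm[:num_test], perm[num_test:]
    test_set = [dataset[i] for i in test_idx]
    train_set = [dataset[i] for i in train_idx]
    return train_set, test_set
```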
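Training-loop sketch. The procedure quoted in the "Experiment Setup" row (Adam, up to 10000 epochs, early stopping on validation accuracy with a patience of 200, or 400 for Chameleon and Squirrel) corresponds to a standard loop of the following shape; the model/data interfaces, the dense adjacency argument and the cross-entropy objective are placeholders, not taken from the paper.

```python
import copy
import torch


def train_with_early_stopping(model, data, lr=0.001, max_epochs=10_000, patience=200):
    """Adam training with early stopping on validation accuracy.

    `data` is assumed to expose x, adj, y and boolean train/val masks;
    these names are placeholders for whatever loader is actually used."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_acc, best_state, stale = 0.0, copy.deepcopy(model.state_dict()), 0
    for epoch in range(max_epochs):
        model.train()
        opt.zero_grad()
        out = model(data.x, data.adj)
        loss = torch.nn.functional.cross_entropy(
            out[data.train_mask], data.y[data.train_mask]
        )
        loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            pred = model(data.x, data.adj).argmax(dim=-1)
            val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
        if val_acc > best_acc:
            best_acc, best_state, stale = val_acc, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:  # 200 in general, 400 for Chameleon/Squirrel
                break
    model.load_state_dict(best_state)
    return model, best_acc
```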