Effective Structural Encodings via Local Curvature Profiles

Authors: Lukas Fesser, Melanie Weber

ICLR 2024

Reproducibility assessment. Each entry below lists a reproducibility variable, its result, and the supporting LLM response.
Research Type: Experimental
  "We analyze the effectiveness of LCP through a range of experiments, which reveal LCP's superior performance in node- and graph-level tasks."
Researcher Affiliation: Academia
  Lukas Fesser, Faculty of Arts and Sciences, Harvard University (lukas_fesser@fas.harvard.edu); Melanie Weber, Department of Applied Mathematics, Harvard University (mweber@g.harvard.edu)
Pseudocode: No
  The paper describes its methods and calculations (e.g., the LCP definition and ORC computation) in narrative text and mathematical formulas, but does not include any clearly labeled pseudocode or algorithm blocks.
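
Since the paper gives no pseudocode, the following is a minimal Python sketch of the LCP idea as described: compute the Ollivier-Ricci curvature (ORC) of each edge via optimal transport (the paper cites Python Optimal Transport), then encode each node with summary statistics over the ORC values of its incident edges. The uniform neighborhood measures, the particular statistics, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Local Curvature Profile (LCP) structural encoding.
# Assumptions: uniform probability measures on 1-hop neighborhoods,
# and [min, max, mean, std, median] as the per-node summary statistics.
import networkx as nx
import numpy as np
import ot  # Python Optimal Transport, as cited in the paper


def ollivier_ricci_curvature(G, u, v):
    """ORC of edge (u, v) with uniform measures on the neighbors of u and v."""
    nu, nv = list(G.neighbors(u)), list(G.neighbors(v))
    # Cost matrix: shortest-path distances between the two neighborhoods.
    M = np.array([[nx.shortest_path_length(G, a, b) for b in nv] for a in nu],
                 dtype=float)
    mu = np.full(len(nu), 1.0 / len(nu))
    mv = np.full(len(nv), 1.0 / len(nv))
    W1 = ot.emd2(mu, mv, M)  # Wasserstein-1 distance between the two measures
    return 1.0 - W1 / nx.shortest_path_length(G, u, v)


def local_curvature_profile(G):
    """Per-node encoding: summary statistics of incident-edge ORC values."""
    orc = {e: ollivier_ricci_curvature(G, *e) for e in G.edges()}
    profile = {}
    for v in G.nodes():
        vals = [c for e, c in orc.items() if v in e]
        profile[v] = [np.min(vals), np.max(vals), np.mean(vals),
                      np.std(vals), np.median(vals)]
    return profile


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(local_curvature_profile(G)[0])
```
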
Open Source Code: Yes
  "Code available at https://github.com/Weber-GeoML/Local_Curvature_Profile"
Open Datasets: Yes
  "We conduct our node classification experiments on the publicly available CORA and CITESEER (Yang et al., 2016) datasets, and our graph classification experiments on the ENZYMES, IMDB, MUTAG and PROTEINS datasets from the TUDataset collection (Morris et al., 2020)."
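
Assuming the standard PyTorch Geometric loaders (the paper lists PyTorch Geometric as a dependency), a minimal loading sketch could look as follows; the root paths are placeholders, and mapping the paper's "IMDB" to IMDB-BINARY is our assumption.

```python
# Sketch: loading the cited datasets via PyTorch Geometric's built-in loaders.
from torch_geometric.datasets import Planetoid, TUDataset

# Node classification datasets (Yang et al., 2016).
cora = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")

# Graph classification datasets from the TUDataset collection (Morris et al., 2020).
# "IMDB-BINARY" for the paper's "IMDB" is an assumption.
graph_datasets = {name: TUDataset(root="data/TUDataset", name=name)
                  for name in ["ENZYMES", "IMDB-BINARY", "MUTAG", "PROTEINS"]}
```
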
Dataset Splits: Yes
  "We record the test set accuracy of the settings with the highest validation accuracy. ... We use a train/val/test split of 50/25/25."
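
A minimal sketch of a 50/25/25 split consistent with the quoted setup; the shuffling and seeding strategy are assumptions, since the paper's procedure beyond the ratios is not quoted here.

```python
# Sketch: shuffle indices and split them 50/25/25 into train/val/test.
import torch


def split_indices(num_samples, seed=0):
    g = torch.Generator().manual_seed(seed)  # assumed fixed seed
    perm = torch.randperm(num_samples, generator=g)
    n_train = num_samples // 2   # 50% train
    n_val = num_samples // 4     # 25% validation
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]  # remaining ~25% test
    return train, val, test
```
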
Hardware Specification: Yes
  "Our experiments were conducted on a local server with the specifications presented in the following table."
  Architecture: x86_64
  OS: Ubuntu 20.04.5 LTS x86_64
  CPU: AMD EPYC 7742 64-core
  GPU: NVIDIA A100 Tensor Core
  RAM: 40 GB
Software Dependencies: No
  The paper states: "We implemented all experiments in this paper in Python using PyTorch, NumPy, PyTorch Geometric, and Python Optimal Transport." However, specific version numbers for these libraries are not provided.
Experiment Setup: Yes
  "Node classification. We use a GNN with 3 layers and hidden dimension 128. We further use a dropout probability of 0.5, and a ReLU activation."
  "Graph classification. We use a GNN with 4 layers and hidden dimension 64. We further use a dropout probability of 0.5, and a ReLU activation."
  "Unless explicitly stated otherwise, we train all models until we observe no improvements in the validation accuracy for 100 epochs using the Adam optimizer with learning rate 1e-3 and a batch size of 16."
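
A minimal sketch of the quoted node-classification configuration: 3 layers, hidden dimension 128, dropout 0.5, ReLU, Adam with learning rate 1e-3, and early stopping after 100 epochs without validation improvement. The GCN backbone is an assumption (the quote only says "GNN"), and the batch size of 16 applies to graph-level training, so it does not appear in this full-batch node-classification sketch.

```python
# Sketch of the described node-classification setup under the assumptions above.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class NodeGNN(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden=128, num_layers=3, dropout=0.5):
        super().__init__()
        dims = [in_dim] + [hidden] * (num_layers - 1) + [num_classes]
        self.convs = torch.nn.ModuleList(
            [GCNConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])])
        self.dropout = dropout

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
            x = F.dropout(x, p=self.dropout, training=self.training)
        return self.convs[-1](x, edge_index)


def train(model, data, patience=100, lr=1e-3):
    """Train until validation accuracy stops improving for `patience` epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, epochs_since_best = 0.0, 0
    while epochs_since_best < patience:
        model.train()
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        F.cross_entropy(out[data.train_mask], data.y[data.train_mask]).backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            pred = model(data.x, data.edge_index).argmax(dim=-1)
            val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
        if val_acc > best_val:
            best_val, epochs_since_best = val_acc, 0
        else:
            epochs_since_best += 1
```
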