On the Stability of Expressive Positional Encodings for Graphs

Authors: Yinan Huang, William Lu, Joshua Robinson, Yu Yang, Muhan Zhang, Stefanie Jegelka, Pan Li

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we evaluate the effectiveness of our method on molecular property prediction, and out-of-distribution generalization tasks, finding improved generalization compared to existing positional encoding methods. Our code is available at https://github.com/Graph-COM/SPE.
Researcher Affiliation | Academia | 1Georgia Institute of Technology, 2Purdue University, 3Stanford University, 4Tongji University, 5Peking University, 6MIT CSAIL
Pseudocode | No | The paper describes the SPE architecture mathematically and illustrates it in Figure 1, but it does not include a formal pseudocode block or an algorithm labeled as such. (A hedged sketch follows the table.)
Open Source Code | Yes | Our code is available at https://github.com/Graph-COM/SPE.
Open Datasets | Yes | We primarily use three datasets: ZINC (Dwivedi et al., 2023), Alchemy (Chen et al., 2019) and DrugOOD (Ji et al., 2023). (A loading sketch follows the table.)
Dataset Splits | Yes | For each domain, the full dataset is divided into five partitions: the training set, the in-distribution (ID) validation/test sets, the out-of-distribution validation/test sets. ... For each task, we randomly split dataset into training, validation and test by 8:1:1. (A split sketch follows the table.)
Hardware Specification | Yes | GPU is Quadro RTX 6000
Software Dependencies | No | The paper mentions software components such as PyTorch, GIN, Deep Sets, MLPs, the Adam optimizer, and ReLU activations, but does not provide specific version numbers for them.
Experiment Setup | Yes | We use Adam optimizer with an initial learning rate 0.001 and 100 warm-up steps. We adopt a linear decay learning rate scheduler. Batch size is 128 for ZINC, Alchemy and substructures counting, 64 for DrugOOD. (A schedule sketch follows the table.)
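Since the paper defines SPE only in equations, here is a hedged sketch of the construction as we read it: positional encodings of the form rho(V diag(phi_1(lam)) V^T, ..., V diag(phi_m(lam)) V^T), where V holds the Laplacian eigenvectors, lam the eigenvalues, each phi_l is a network over the spectrum, and rho is a permutation-equivariant readout. Every concrete module choice below (widths, plain MLPs, the summation readout) is our assumption, not the authors' exact architecture; see the repository for the real implementation.

```python
# Hedged sketch of an SPE-style positional encoder; all hyperparameters
# and module choices are assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class SPESketch(nn.Module):
    def __init__(self, m=8, hidden=32, out_dim=16):
        super().__init__()
        # phi: maps each eigenvalue to m channels (stand-in for the paper's
        # elementwise eigenvalue networks)
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, m))
        # rho: stand-in permutation-equivariant readout over the m channels
        self.rho = nn.Sequential(nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, V, lam):
        # V: (n, d) eigenvectors, lam: (d,) eigenvalues
        w = self.phi(lam.unsqueeze(-1))              # (d, m) soft eigenvalue weights
        # stack of "soft projectors" V diag(w[:, l]) V^T, one per channel l
        P = torch.einsum("id,dm,jd->ijm", V, w, V)   # (n, n, m)
        return self.rho(P).sum(dim=1)                # (n, out_dim) node positional encodings
```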
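For the datasets row, a minimal loading sketch assuming PyTorch Geometric's built-in ZINC loader; the paper's actual data pipeline lives in the linked repository, and `root` and `subset=True` (the 12k-graph ZINC subset) are our assumptions.

```python
from torch_geometric.datasets import ZINC
from torch_geometric.loader import DataLoader

# The ZINC subset ships with fixed train/val/test splits.
train_set = ZINC(root="data/ZINC", subset=True, split="train")
val_set = ZINC(root="data/ZINC", subset=True, split="val")
test_set = ZINC(root="data/ZINC", subset=True, split="test")

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size from the paper
```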
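For the splits row, a minimal sketch of the quoted 8:1:1 random split using torch.utils.data.random_split; the fixed seed is an assumption we add for reproducibility.

```python
import torch
from torch.utils.data import random_split

def split_811(dataset, seed=0):
    """Randomly split a dataset 8:1:1 into train/val/test."""
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    n_test = n - n_train - n_val  # remainder absorbs rounding
    gen = torch.Generator().manual_seed(seed)  # assumed seed, for reproducibility
    return random_split(dataset, [n_train, n_val, n_test], generator=gen)
```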
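For the setup row, a sketch of the reported schedule: Adam at learning rate 0.001, 100 warm-up steps, then linear decay. The model stand-in and `total_steps` are assumptions, since the quote does not state the decay endpoint.

```python
import torch

model = torch.nn.Linear(16, 1)  # stand-in for the SPE-equipped GNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_steps, total_steps = 100, 100_000  # total_steps is an assumption

def lr_lambda(step):
    if step < warmup_steps:
        return (step + 1) / warmup_steps  # linear warm-up to the base lr
    # linear decay from the base lr toward zero over the remaining steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```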