Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Bridging Graph Position Encodings for Transformers with Weighted Graph-Walking Automata

Authors: Patrick Soga, David Chiang

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental We compare the performance of GAPE with other PE schemes on both machine translation and graph-structured tasks, and we show that it generalizes and connects with several other PEs. An additional contribution of this study is a theoretical and controlled experimental comparison of many recent PEs in graph transformers, independent of the use of edge features.
Researcher Affiliation Academia Patrick Soga (EMAIL), Department of Computer Science and Engineering, University of Notre Dame; David Chiang (EMAIL), Department of Computer Science and Engineering, University of Notre Dame
Pseudocode No The paper describes mathematical formulations and definitions for weighted graph-walking automata and position encodings (e.g., Definition 1, Definition 2, Equation 2), but does not contain a clearly structured pseudocode or algorithm block.
Open Source Code No To verify the above proposition, we ran an MT experiment benchmarking several graph PE schemes and compared their performance with GAPE using the open-source Transformer implementation Witwicky (https://github.com/tnq177/witwicky) with default settings. The paper refers to a third-party open-source implementation used for comparison, but does not provide access to the authors' own implementation of GAPE.
Open Datasets Yes ZINC (Irwin et al., 2012) is a graph regression dataset... CSL (Murphy et al., 2019) is a graph classification dataset... CYCLES (Murphy et al., 2019; Loukas, 2020) is a cycle detection dataset... PATTERN and CLUSTER (Dwivedi et al., 2020; Abbe, 2018) are node classification datasets... PCQM4Mv2 is a dataset from the Open Graph Benchmark Large-Scale Challenge (OGB-LSC) (Hu et al., 2021)...
Dataset Splits Yes For CSL, following Murphy et al. (2019), we perform a 5-fold cross validation split with 5 sets of train, validation, and test data and report the average accuracy. We use the same train, test, and validation splits as Loukas (2020) for CYCLES, using 200 graphs for training, 1,000 graphs for validation, and 10,000 graphs for test. For PATTERN and CLUSTER, we use the same splits as Dwivedi et al. (2020), with 10,000 graphs for train and 2,000 graphs for validation and test. For PLANAR, we use 7,000 graphs for training and 1,500 for validation and testing. For PCQM4Mv2, the train set comprises 3,378,606 molecules, while the validation set has 73,545 molecules.
Hardware Specification Yes Results are averaged across 4 runs on an Ubuntu 22.04 LTS desktop equipped with an AMD Ryzen 7 3700X 8-core CPU and 32GB DDR4 RAM.
Software Dependencies No To solve Eq. (2), we use an implementation of the Bartels-Stewart algorithm from SciPy (Virtanen et al., 2020). Across all tasks, we use the Adam (Kingma & Ba, 2015) optimizer. While SciPy and Adam are mentioned, specific version numbers for these or any other software libraries/frameworks (e.g., Python, PyTorch/TensorFlow) are not provided.
Experiment Setup Yes For GAPE, on MT, we used PE dimension k=512 and a damping factor γ=1. On the graph-level tasks, we use k=32 and γ=0.02. For nearly all tasks, we use 10 layers in the graph transformer, 80 node feature dimensions, 8 attention heads, learning rate of 0.005, reduce factor of 0.5, and patience of 10. For ZINC, we follow Dwivedi et al. (2020) and use a learning rate of 0.007 and patience of 15. CSL is such a small dataset that we opted to shrink the number of transformer layers down to 6.
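The Software Dependencies row above notes that the paper solves its Eq. (2) with SciPy's Bartels-Stewart implementation. As a minimal sketch of what that dependency looks like in practice, assuming Eq. (2) reduces to a standard Sylvester equation AX + XB = Q (the form `scipy.linalg.solve_sylvester` handles; the matrices below are random placeholders, not the paper's actual transition or feature matrices):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Placeholder matrices standing in for the operands of Eq. (2);
# shapes are arbitrary but must be compatible: A is (m, m),
# B is (n, n), and Q is (m, n).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
Q = rng.standard_normal((4, 3))

# solve_sylvester applies the Bartels-Stewart algorithm to find
# the X satisfying A @ X + X @ B = Q.
X = solve_sylvester(A, B, Q)

# Sanity check: the residual of the solved equation should be
# numerically negligible.
residual = np.linalg.norm(A @ X + X @ B - Q)
print(residual)
```

A solution exists and is unique whenever A and -B share no eigenvalues, which holds almost surely for random dense matrices like these.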