Watch Your Step: Learning Node Embeddings via Graph Attention

Authors: Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, Alexander A. Alemi

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment on link prediction tasks, as we aim to produce embeddings that best-preserve the graph structure, generalizing to unseen information. We improve state-of-the-art results on a comprehensive suite of real-world graph datasets including social, collaboration, and biological networks, where we observe that our graph attention model can reduce the error by up to 20%-40%.
Researcher Affiliation | Collaboration | Sami Abu-El-Haija, Information Sciences Institute, University of Southern California, haija@isi.edu; Bryan Perozzi, Google AI, New York City, NY, bperozzi@acm.org; Rami Al-Rfou, Google AI, Mountain View, CA, rmyeid@google.com; Alex Alemi, Google AI, Mountain View, CA, alemi@google.com
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | To ensure repeatability of results, we have released our model and instructions. Available at http://sami.haija.org/graph/context
Open Datasets | Yes | Datasets are available from SNAP (https://snap.stanford.edu/data). PPI [33] (C. Stark, B. Breitkreutz, T. Reguly, L. Boucher, A. Breitkreutz, and M. Tyers. Biogrid: A general repository for interaction datasets. In Nucleic Acids Research, 2006.)
Dataset Splits | Yes | For classification, we follow the data splits of [37].
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions using 'TensorFlow' but does not provide specific version numbers for TensorFlow or any other software dependencies.
Experiment Setup | Yes | For the results in Table 1, we use β = 0.5, C = 10, and P^(0) = diag(80), which corresponds to 80 walks per node.
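
The hyperparameters quoted in the last row map onto the paper's attention-weighted expectation over random-walk context distributions, roughly E[D] ≈ P^(0) · Σ_{k=1}^{C} softmax(q)_k · T^k, where T is the graph's row-normalized transition matrix and q are learned attention logits over the C walk steps. Below is a minimal sketch of that expectation under those assumptions; the function name, toy graph, and NumPy implementation are illustrative only and are not taken from the authors' released code.

```python
import numpy as np

def expected_cooccurrence(adj, q_logits, walks_per_node=80):
    """Attention-weighted expected co-occurrence counts (illustrative sketch).

    adj      : (n, n) symmetric adjacency matrix
    q_logits : (C,) logits; softmax(q_logits) is the attention over walk steps
    """
    n = adj.shape[0]
    # Row-normalized transition matrix T.
    deg = adj.sum(axis=1, keepdims=True)
    T = adj / np.maximum(deg, 1.0)
    # Softmax attention over the C context steps.
    q = np.exp(q_logits - q_logits.max())
    q = q / q.sum()
    # Accumulate sum_k q_k * T^k, then scale by P^(0) = diag(walks_per_node).
    expectation = np.zeros((n, n))
    T_power = np.eye(n)
    for q_k in q:
        T_power = T_power @ T          # T^k at step k
        expectation += q_k * T_power
    return walks_per_node * expectation

# Toy usage: 4-node cycle graph, C = 10 uniform attention, 80 walks per node.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(expected_cooccurrence(A, q_logits=np.zeros(10)).round(2))
```

In the paper the attention logits are trained jointly with the node embeddings; the sketch only evaluates the expectation for fixed logits, which is where the quoted β = 0.5 (attention regularization), C = 10, and 80 walks per node enter the setup.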