Representation Learning on Graphs with Jumping Knowledge Networks

Authors: Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance.
Researcher Affiliation | Academia | (1) Massachusetts Institute of Technology (MIT); (2) National Institute of Informatics, Tokyo.
Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper.
Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a repository or mention code in supplementary materials.
Open Datasets | Yes | We evaluate JK-Nets on four benchmark datasets. (I) The task on citation networks (Citeseer, Cora) (Sen et al., 2008) is to classify academic papers into different subjects. ... (II) On Reddit (Hamilton et al., 2017)... (III) For protein-protein interaction networks (PPI) (Hamilton et al., 2017)...
Dataset Splits | Yes | We split nodes in each graph into 60%, 20% and 20% for training, validation and testing. ... 20 graphs are used for training, 2 graphs are used for validation and the rest for testing. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments; it only mentions 'GPU memory constraints' in general terms.
Software Dependencies | No | The paper mentions the Adam optimizer but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries).
Experiment Setup | Yes | Throughout the experiments, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.005. We fix the dropout rate to be 0.5, the dimension of hidden features to be within {16, 32}, and add an L2 regularization of 0.0005 on model parameters. ... These models are trained with Batch-size 2 and Adam optimizer with learning rate of 0.005. (A configuration sketch follows the table.)
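
The dataset and split details quoted above can be made concrete with a small sketch. The following is a hypothetical illustration, not the authors' code: it assumes PyTorch Geometric for loading Cora, Citeseer, Reddit and PPI (the paper does not name any library), and `random_node_split` is an illustrative helper showing one way to realize the reported 60%/20%/20% node split.

```python
# Hypothetical sketch, not from the paper: assumes PyTorch Geometric for dataset
# loading and shows one way to realize the reported 60/20/20 node split.
import torch
from torch_geometric.datasets import Planetoid, Reddit, PPI

cora = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="Citeseer")
reddit = Reddit(root="data/Reddit")
ppi_train = PPI(root="data/PPI", split="train")   # PPI is split at the graph level

def random_node_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly assign nodes to 60% train / 20% validation / 20% test masks."""
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    masks = {}
    for name, idx in [("train", perm[:n_train]),
                      ("val", perm[n_train:n_train + n_val]),
                      ("test", perm[n_train + n_val:])]:
        mask = torch.zeros(num_nodes, dtype=torch.bool)
        mask[idx] = True
        masks[name] = mask
    return masks

splits = random_node_split(cora[0].num_nodes)
```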
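The reported training configuration (Adam with learning rate 0.005, dropout 0.5, hidden dimension in {16, 32}, L2 regularization 0.0005) can likewise be wired up in a sketch. The model below is not the authors' implementation; it is a minimal JK-style stack built from PyTorch Geometric's GCNConv and JumpingKnowledge (concatenation) modules, assumed here purely for illustration, with the L2 term applied as Adam weight decay.

```python
# Hypothetical sketch, not the authors' implementation: a minimal JK-style model
# using PyTorch Geometric's GCNConv and JumpingKnowledge (concatenation) layers,
# configured with the hyperparameters quoted in the table above.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, JumpingKnowledge

class JKNetSketch(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, num_layers=4, dropout=0.5):
        super().__init__()
        self.convs = torch.nn.ModuleList([GCNConv(in_dim, hidden_dim)])
        for _ in range(num_layers - 1):
            self.convs.append(GCNConv(hidden_dim, hidden_dim))
        self.jump = JumpingKnowledge(mode="cat")   # concatenate all layer outputs
        self.lin = torch.nn.Linear(num_layers * hidden_dim, num_classes)
        self.dropout = dropout

    def forward(self, x, edge_index):
        xs = []
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            x = F.dropout(x, p=self.dropout, training=self.training)
            xs.append(x)
        return self.lin(self.jump(xs))

# Hyperparameters as reported: hidden dim in {16, 32}, dropout 0.5,
# Adam with lr 0.005, and L2 regularization 0.0005 applied as weight decay.
model = JKNetSketch(in_dim=1433, hidden_dim=32, num_classes=7)   # Cora-sized dims
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.0005)
```

For the PPI experiments the paper additionally reports a batch size of 2, which would correspond to a data-loader setting (graphs per batch) rather than anything in the model itself.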