Principal Neighbourhood Aggregation for Graph Nets

Authors: Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, Petar Veličković

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model."
Researcher Affiliation | Collaboration | Gabriele Corso, University of Cambridge (gc579@cam.ac.uk); Luca Cavalleri, University of Cambridge (lc737@cam.ac.uk); Dominique Beaini, InVivoAI (dominique@invivoai.com); Pietro Liò, University of Cambridge (pietro.lio@cst.cam.ac.uk); Petar Veličković, DeepMind (petarv@google.com)
Pseudocode | No | The paper describes its methods through mathematical equations and prose, but provides no explicit pseudocode or algorithm blocks (a hedged sketch of the described aggregation scheme is given after this table).
Open Source Code | Yes | "The code for all the aggregators, scalers, models (in PyTorch, DGL and PyTorch Geometric frameworks), architectures, multi-task dataset generation and real-world benchmarks is available here." (A usage sketch of the PyTorch Geometric layer follows the table.)
Open Datasets | Yes | "To further demonstrate the performance of our model, we also run tests on recently proposed real-world GNN benchmark datasets [5, 22] with tasks taken from molecular chemistry and computer vision."
Dataset Splits | Yes | "Learning rates, weight decay, dropout and other hyperparameters were tuned on the validation set."
Hardware Specification | No | The paper does not specify the hardware (e.g., specific GPU or CPU models) used to run the experiments.
Software Dependencies | No | The paper names software frameworks ("PyTorch, DGL and PyTorch Geometric") but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "We trained the models using the Adam optimizer for a maximum of 10,000 epochs, using early stopping with a patience of 1,000 epochs. Learning rates, weight decay, dropout and other hyperparameters were tuned on the validation set." (A minimal training-loop sketch reflecting this setup is given below.)
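
Because the paper specifies its layer only through equations, the following is a minimal sketch (not the authors' implementation) of the aggregation scheme it describes: four aggregators (mean, standard deviation, max, min) combined with three logarithmic degree scalers S(d, α) = (log(d + 1)/δ)^α for α ∈ {0, 1, −1}, where δ is the average of log(d + 1) over the training set. The names `pna_aggregate`, `h`, `d`, and `delta` are illustrative.

```python
import torch

def pna_aggregate(h: torch.Tensor, d: int, delta: float) -> torch.Tensor:
    """Sketch of PNA aggregation for a single node.

    h: (num_neighbours, feat) tensor of incoming messages.
    d: the node's degree (assumed >= 1, so log(d + 1) > 0).
    delta: mean of log(d + 1) over the training set.
    """
    # Four aggregators: mean, standard deviation, max, min.
    aggregators = [
        h.mean(dim=0),
        h.std(dim=0, unbiased=False),
        h.max(dim=0).values,
        h.min(dim=0).values,
    ]
    # Three scalers: identity (alpha=0), amplification (alpha=1),
    # attenuation (alpha=-1), with S(d, alpha) = (log(d + 1) / delta) ** alpha.
    log_deg = torch.log(torch.tensor(d + 1.0))
    scalers = [1.0, log_deg / delta, delta / log_deg]
    # Tensor product of aggregators and scalers: 4 x 3 = 12 outputs,
    # concatenated before being passed to the layer's MLP.
    return torch.cat([s * a for a in aggregators for s in scalers], dim=-1)
```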
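The quoted code release covers PyTorch, DGL and PyTorch Geometric; as an illustration, the sketch below uses the `PNAConv` layer that ships with PyTorch Geometric (API as of recent releases; check your installed version). The toy one-graph dataset is a hypothetical stand-in for real training data and exists only to show the in-degree histogram the layer requires.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import PNAConv
from torch_geometric.utils import degree

# Toy stand-in for a real training set: one 4-node cycle graph.
train_dataset = [Data(x=torch.randn(4, 64),
                      edge_index=torch.tensor([[0, 1, 2, 3],
                                               [1, 2, 3, 0]]))]

# First pass: find the maximum in-degree across the training graphs.
max_degree = 0
for data in train_dataset:
    d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
    max_degree = max(max_degree, int(d.max()))

# Second pass: accumulate the in-degree histogram, from which PNAConv
# derives delta, the normalising constant of the degree scalers.
deg = torch.zeros(max_degree + 1, dtype=torch.long)
for data in train_dataset:
    d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
    deg += torch.bincount(d, minlength=deg.numel())

conv = PNAConv(in_channels=64, out_channels=64,
               aggregators=['mean', 'min', 'max', 'std'],
               scalers=['identity', 'amplification', 'attenuation'],
               deg=deg)

out = conv(train_dataset[0].x, train_dataset[0].edge_index)  # shape (4, 64)
```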
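Finally, the quoted experiment setup maps onto a straightforward early-stopping loop. Below is a minimal, hedged sketch of such a loop; `model`, `train_one_epoch`, and `evaluate_val_loss` are hypothetical callables, and the learning rate and weight decay are illustrative defaults, since the report states these were tuned on the validation set.

```python
import copy
import torch

def run_training(model, train_one_epoch, evaluate_val_loss,
                 lr=1e-3, weight_decay=3e-6):
    """Adam optimizer, at most 10,000 epochs, early stopping with a
    patience of 1,000 epochs on validation loss, as quoted above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                                 weight_decay=weight_decay)
    best_val, best_state, bad_epochs = float('inf'), None, 0
    for epoch in range(10_000):
        train_one_epoch(model, optimizer)
        val_loss = evaluate_val_loss(model)
        if val_loss < best_val:
            # New best validation loss: snapshot weights, reset patience.
            best_val, bad_epochs = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            bad_epochs += 1
            if bad_epochs >= 1_000:
                break  # patience exhausted
    model.load_state_dict(best_state)  # restore the best checkpoint
    return best_val
```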