Directional Graph Networks
Authors: Dominique Beaini, Saro Passaro, Vincent Létourneau, Will Hamilton, Gabriele Corso, Pietro Liò
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and 11% to 32% on the molecular ZINC dataset, and a relative increase in precision of 1.6% on the MolPCBA dataset. We tested our method on 5 standard datasets from (Dwivedi et al., 2020) and (Hu et al., 2020), using two types of architectures, and either using or ignoring edge features. In all cases, we observed state-of-the-art results from the proposed DGN, with relative improvements of 8% on CIFAR10, 11-32% on ZINC, 0.8% on MolHIV and 1.6% on MolPCBA. |
| Researcher Affiliation | Collaboration | Dominique Beaini *1, Saro Passaro *2, Vincent Létourneau 1,3, William L. Hamilton 3,4, Gabriele Corso 2, Pietro Liò 2. *Equal contribution. 1InVivo AI, Montreal, Canada; 2Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom; 3MILA, Montreal, Canada; 4McGill University, Montreal, Canada. Correspondence to: Dominique Beaini <dominique@invivoai.com>, Saro Passaro <sp976@cam.ac.uk>. |
| Pseudocode | No | The paper includes diagrams (e.g., Figure 1) and mathematical equations to describe the methodology, but no structured pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | We implemented the models using the DGL and PyTorch libraries and we provide the code at the address https://github.com/Saro00/DGN. |
| Open Datasets | Yes | We test our method on standard benchmarks from (Dwivedi et al., 2020) and (Hu et al., 2020), namely ZINC, CIFAR10, PATTERN, MolHIV and MolPCBA with more details on the datasets and how we enforce a fair comparison in appendix C.1. |
| Dataset Splits | Yes | We test our method on standard benchmarks from (Dwivedi et al., 2020) and (Hu et al., 2020), namely ZINC, CIFAR10, PATTERN, MolHIV and MolPCBA with more details on the datasets and how we enforce a fair comparison in appendix C.1. δ is the average node degree in the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | We implemented the models using the DGL and PyTorch libraries and we provide the code at the address https://github.com/Saro00/DGN. (No version numbers are specified for DGL or PyTorch; a quick way to record the installed versions is sketched just after the table.) |
| Experiment Setup | Yes | For the empirical experiments we inserted our proposed aggregation method in two different types of message passing architectures used in the literature: a simple convolutional architecture similar to the one present in GCN (equation 9a)... and a more complex and general one typical of MPNNs (9b)...Figure 5. Test set results using a parameter budget of 100k with the same hyperparameters as (Corso et al., 2020), except MolPCBA with a budget of 7M. Here, k is a hyperparameter, usually 1 or 2... |
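
Since the dependencies are quoted without version numbers, a reproducer may want to record whichever DGL and PyTorch releases their environment actually resolves before running the released code. The snippet below is an illustrative note-taking aid, not part of the official repository; it relies only on the standard `__version__` attributes of both libraries.

```python
# Record the locally installed library versions, since the paper does not
# pin specific DGL or PyTorch releases.
import torch
import dgl

print(f"torch {torch.__version__} (CUDA available: {torch.cuda.is_available()})")
print(f"dgl {dgl.__version__}")
```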
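
As a concrete reading of the Experiment Setup row, here is a minimal PyTorch sketch of the directional aggregation the paper plugs into a GCN-style layer (in the spirit of equation 9a): the direction field is the gradient of the first non-trivial eigenvector of the graph Laplacian, and the layer concatenates a plain mean aggregation with the directional-smoothing and directional-derivative aggregations. The dense-adjacency formulation and the names `directional_aggregators` and `SimpleDGNLayer` are ours for illustration only; the authors' DGL-based implementation lives at https://github.com/Saro00/DGN.

```python
import torch

def directional_aggregators(adj: torch.Tensor, eps: float = 1e-8):
    """Build the directional-smoothing (B_av) and directional-derivative (B_dx)
    matrices from the gradient of the first non-trivial eigenvector of the
    combinatorial Laplacian of a dense adjacency matrix."""
    deg = adj.sum(dim=1)
    lap = torch.diag(deg) - adj                       # combinatorial Laplacian L = D - A
    _, eigvecs = torch.linalg.eigh(lap)               # eigenvalues in ascending order
    phi = eigvecs[:, 1]                               # first non-trivial eigenvector
    # Edge-wise gradient field: field[i, j] = phi[j] - phi[i] on existing edges.
    field = (phi.unsqueeze(0) - phi.unsqueeze(1)) * adj
    abs_field = field.abs()
    row_norm = abs_field.sum(dim=1, keepdim=True) + eps
    B_av = abs_field / row_norm                       # directional smoothing (weighted mean)
    field_hat = field / row_norm
    B_dx = field_hat - torch.diag(field_hat.sum(dim=1))  # centred directional derivative
    return B_av, B_dx

class SimpleDGNLayer(torch.nn.Module):
    """GCN-style layer concatenating mean, directional-smoothing and
    directional-derivative aggregations before a linear projection."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(3 * in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        mean_agg = adj @ x / deg                      # plain mean over neighbours
        B_av, B_dx = directional_aggregators(adj)
        # Absolute value keeps the layer invariant to the eigenvector's sign.
        h = torch.cat([mean_agg, B_av @ x, (B_dx @ x).abs()], dim=-1)
        return torch.relu(self.lin(h))

# Toy usage on a 4-node path graph with 8-dimensional node features.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(4, 8)
out = SimpleDGNLayer(8, 16)(x, adj)                   # shape: (4, 16)
```

Taking the absolute value of the derivative aggregation mirrors the paper's way of staying invariant to the arbitrary sign of the Laplacian eigenvector; a production implementation would precompute the eigenvectors once per graph and use sparse message passing rather than the dense matrix products shown here.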