LanczosNet: Multi-Scale Deep Graph Convolutional Networks
Authors: Renjie Liao, Zhizhen Zhao, Raquel Urtasun, Richard Zemel
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We benchmark our model against several recent deep graph networks on citation networks and QM8 quantum chemistry dataset. Experimental results show that our model achieves the state-of-the-art performance in most tasks. |
| Researcher Affiliation | Collaboration | University of Toronto, Uber ATG Toronto, Vector Institute, University of Illinois at Urbana-Champaign, Canadian Institute for Advanced Research |
| Pseudocode | Yes | Algorithm 1: Lanczos Algorithm (a minimal sketch of the Lanczos iteration is given after this table) |
| Open Source Code | Yes | We implement all methods using PyTorch [65] and release the code at https://github.com/lrjconan/LanczosNetwork. |
| Open Datasets | Yes | We test them on two sets of tasks: (1) semi-supervised document classification on 3 citation networks [63], (2) supervised regression of molecule property on QM8 quantum chemistry dataset [64]. |
| Dataset Splits | Yes | We use the split provided by DeepChem which have 17428, 2179 and 2179 graphs for training, validation and testing respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper states 'We implement all methods using PyTorch [65]', but it does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | All methods are trained with Adam with learning rate 1.0e-2 and weight decay 5.0e-4. The maximum number of epochs is set to 200. Early stop with window size 10 is also adopted. We tune hyperparameters using Cora alone and fix them for Citeseer and Pubmed. For convolution based methods, we found 2 layers work the best. In GCN-FP, we set the hidden dimension to 64 and dropout to 0.5. |
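
The "Pseudocode" row above refers to the paper's Algorithm 1, the Lanczos algorithm, which LanczosNet uses to build a low-rank tridiagonal approximation of the graph Laplacian for multi-scale graph convolution. For reference, below is a minimal, generic Lanczos tridiagonalization sketch in NumPy; it is not reproduced from the paper, and the variable names, the random starting vector, and the full-reorthogonalization step are assumptions.

```python
# Generic Lanczos tridiagonalization sketch (not the paper's exact Algorithm 1).
# Given a symmetric matrix A (e.g. a graph Laplacian), it returns Q (N x k,
# orthonormal columns) and tridiagonal T (k x k) such that A ~= Q T Q^T.
import numpy as np

def lanczos(A, k, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)                      # random unit starting vector (assumption)
    Q = np.zeros((n, k))
    alphas, betas = [], []
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        z = A @ q
        alpha = q @ z
        z = z - alpha * q - beta * q_prev       # three-term recurrence
        # Full reorthogonalization for numerical stability (optional in exact arithmetic).
        z -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ z)
        beta = np.linalg.norm(z)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-10:                        # invariant subspace found; stop early
            Q = Q[:, :j + 1]
            break
        q_prev, q = q, z / beta
    off = betas[:len(alphas) - 1]
    T = np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)
    return Q, T
```

Eigendecomposing the small tridiagonal T is cheap, which is what makes approximating long-range (multi-scale) diffusion over the graph affordable compared with working on the full Laplacian.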
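
The "Experiment Setup" row quotes the citation-network training configuration. Below is a minimal PyTorch training-loop sketch under that configuration; the model, data loaders, loss function, and the exact early-stopping criterion (here, validation loss with patience 10) are assumptions, not details taken from the paper.

```python
# Training-loop sketch for the reported setup: Adam, lr 1.0e-2, weight decay
# 5.0e-4, at most 200 epochs, early stopping with window size 10.
import copy
import torch

def train(model, train_loader, val_loader, loss_fn, max_epochs=200, patience=10):
    optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-2, weight_decay=5.0e-4)
    best_val, best_state, epochs_since_best = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        # Validation pass drives early stopping.
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:
            best_val = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_since_best = 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:   # no improvement within the window
                break
    if best_state is not None:
        model.load_state_dict(best_state)       # restore the best validation checkpoint
    return model
```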