Learning Steady-States of Iterative Algorithms over Graphs
Authors: Hanjun Dai, Zornitsa Kozareva, Bo Dai, Alex Smola, Le Song
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we experimentally demonstrate the effectiveness and efficiency of learning various graph algorithms. We compare our proposed algorithm with GNN variants that use a fixed, finite number of propagation steps T, in experiments under both transductive and inductive settings. (A minimal sketch of this fixed-T baseline follows the table.) |
| Researcher Affiliation | Collaboration | ¹Georgia Institute of Technology, ²Amazon, ³Ant Financial. |
| Pseudocode | Yes | Algorithm 1: Learning with Stochastic Fixed Point Iteration (a hedged sketch of this alternation appears after the table). |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We take the Blogcatalog and Pubmed graphs for evaluation (graph statistics can be found in Table 6 and Table 5 in Appendix). (Section 5.2) ... namely the Amazon product co-purchasing network dataset (Yang & Leskovec, 2015) (Section 5.4) ... We use the PPI dataset from GraphSAGE (Hamilton et al., 2017a). |
| Dataset Splits | Yes | In the transductive setting, we reserve 10% of nodes for held-out evaluation and vary the training set size from 10% to 90% of the total nodes. (Section 5.2) ... We test the learned mean-field scores on 10% of the vertices and vary the size of the training set sampled from the remaining vertices. (Section 5.3) ... We use the same train/valid/test split as in Hamilton et al. (2017a). |
| Hardware Specification | Yes | All the algorithms are executed on a 16-core cluster with 256GB memory. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | The number of propagation steps is tuned over T ∈ {1, ..., 7} for them. (Section 5) ... For our proposed algorithm, we tune the number of inner loops for SGD and fixed point iterations, n_f, n_h ∈ {1, 5, 8}, to balance parameter learning against fixed point constraint satisfaction. (Section 5) ... In our experiment we use the default value (0.85) for the damping factor. (A sketch of the PageRank steady state this refers to follows below.) |
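For context on the baseline comparison above, here is a minimal sketch of a fixed-T, GNN-style propagation, the family of schemes the paper tunes over T ∈ {1, ..., 7}. The tanh/mean operator and all names (`propagate_fixed_T`, `nbrs`) are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: a fixed-T, GNN-style propagation (the baseline family
# the paper compares against). Assumes nodes are indexed 0..n-1 and each
# has at least one neighbor; tanh over a mean aggregation is an
# illustrative choice of operator, not the published one.
import numpy as np

def propagate_fixed_T(W, h0, nbrs, T):
    """Run exactly T synchronous message-passing rounds."""
    h = h0.copy()
    for _ in range(T):
        h = np.stack([
            np.tanh(W @ np.mean([h[u] for u in nbrs[v]], axis=0))
            for v in range(len(nbrs))
        ])
    return h
```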
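The pseudocode entry names Algorithm 1, which alternates stochastic fixed point updates on node embeddings with SGD steps on the parameters (the n_f and n_h inner loops tuned in the setup row). Below is a hedged sketch of that alternation under simplifying assumptions: a tanh/mean operator, a linear readout, and a squared loss. Every name here (`train`, `aggregate`, the batch size of 32) is illustrative, not the published implementation.

```python
# Hedged sketch of Algorithm 1's alternation (stochastic fixed point
# iteration): n_f embedding sweeps toward h_v = T_theta(h_N(v)), then
# n_h SGD steps on the parameters. Assumes every node has at least one
# neighbor; operator, readout, and loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

def aggregate(h, nbrs, v):
    """Mean of the current neighbor embeddings of node v."""
    return np.mean([h[u] for u in nbrs[v]], axis=0)

def train(nbrs, labels, n_f=5, n_h=5, lr=1e-2, epochs=100):
    n = len(nbrs)
    h = rng.normal(scale=0.1, size=(n, dim))    # node embeddings
    W = rng.normal(scale=0.1, size=(dim, dim))  # operator parameters
    w = rng.normal(scale=0.1, size=dim)         # linear readout
    for _ in range(epochs):
        batch = rng.choice(n, size=min(32, n), replace=False)
        # (1) n_f stochastic fixed point sweeps: push h toward the
        #     steady state h_v = tanh(W @ mean(h_N(v))) on sampled nodes
        for _ in range(n_f):
            for v in batch:
                h[v] = np.tanh(W @ aggregate(h, nbrs, v))
        # (2) n_h SGD steps on labeled nodes, differentiating through a
        #     single application of the operator (neighbors held fixed)
        for _ in range(n_h):
            for v in batch:
                if v not in labels:
                    continue
                a = aggregate(h, nbrs, v)
                t = np.tanh(W @ a)
                err = 2.0 * (w @ t - labels[v])       # d(loss)/d(pred)
                W -= lr * err * np.outer(w * (1.0 - t * t), a)
                w -= lr * err * t
    return h, W, w
```

As a usage example, `train({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}, labels={0: 1.0, 2: -1.0})` runs the sketch on a 4-node cycle with two labeled nodes.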
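The setup row quotes a damping factor of 0.85, the standard PageRank default; in the paper's PageRank experiment, the model learns to predict the steady state of this iteration. Here is a short power-iteration sketch of that steady state, assuming `nbrs` maps each node 0..n-1 to a non-empty out-neighbor list (function and variable names are illustrative).

```python
# Hedged sketch of the PageRank steady state with damping factor 0.85:
# r_v = (1 - d)/n + d * sum_{u -> v} r_u / outdeg(u).
# Assumes no dangling nodes (every node has at least one out-neighbor).
import numpy as np

def pagerank(nbrs, d=0.85, tol=1e-8, max_iter=200):
    """Power iteration until the score vector stops changing."""
    n = len(nbrs)
    r = np.full(n, 1.0 / n)
    out_deg = {u: len(vs) for u, vs in nbrs.items()}
    for _ in range(max_iter):
        nxt = np.full(n, (1.0 - d) / n)
        for u, vs in nbrs.items():
            for v in vs:
                nxt[v] += d * r[u] / out_deg[u]
        if np.abs(nxt - r).sum() < tol:
            return nxt
        r = nxt
    return r
```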