Graph Coarsening with Message-Passing Guarantees
Authors: Antonin Joly, Nicolas Keriven
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct node classification tasks on synthetic and real data and observe improved results compared to performing naive message-passing on the coarsened graph. |
| Researcher Affiliation | Academia | Antonin Joly IRISA, Rennes, France antonin.joly@inria.fr Nicolas Keriven CNRS, IRISA, Rennes, France nicolas.keriven@cnrs.fr |
| Pseudocode | Yes | Algorithm 1 Loukas Algorithm |
| Open Source Code | Yes | The code is available at https://gitlab.inria.fr/anjoly/mp-guarantees-graph-coarsening, and proofs are deferred to App. A. |
| Open Datasets | Yes | Node classification on real graphs. We then perform node classification experiments on real-world graphs, namely Cora [35] and Citeseer [15], using the public split from [43]. |
| Dataset Splits | Yes | Node classification on real graphs. We then perform node classification experiments on real-world graphs, namely Cora [35] and Citeseer [15], using the public split from [43]. |
| Hardware Specification | No | Additionally, large graphs may be too big to fit on GPUs, and mini-batching graph nodes is known to be a difficult graph sampling problem [14], which may no longer be required on a coarsened graph. |
| Software Dependencies | No | The paper mentions 'Code is included as supplementary material, and use only open-source Python libraries' in the NeurIPS checklist, but it does not specify concrete version numbers for any of these libraries or the Python interpreter itself within the main paper or its appendices. |
| Experiment Setup | Yes | B.5 Hyper-parameters for Table 1 and Table 2: For all experiments... We apply our adapted version of Loukas coarsening algorithm with ne = 5%N... For SGC Cora and SGC Citeseer we make 6 propagations... For GCN Cora and Citeseer we use 2 convolutional layers with a hidden dimension of 16. For all experiments we use an Adam optimizer with a learning rate of 0.05 and a weight decay of 0.01. (See the illustrative sketches below the table.) |
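
For context, here is a minimal sketch of the GCN baseline described in the Experiment Setup and Open Datasets rows: Cora with the public Planetoid split, a 2-layer GCN with hidden dimension 16, and Adam with learning rate 0.05 and weight decay 0.01. The paper only states that it uses open-source Python libraries, so the choice of PyTorch Geometric, the class names, and the training-loop details (including the epoch count) are assumptions; the paper's coarsening step is omitted.

```python
# Illustrative sketch only: reproduces the stated baseline hyper-parameters
# (2-layer GCN, hidden dim 16, Adam, lr=0.05, weight decay=0.01, public split),
# assuming PyTorch Geometric. The coarsening step from the paper is not shown.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data", name="Cora", split="public")  # public split from [43]
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)   # hidden dimension of 16
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05, weight_decay=0.01)

model.train()
for epoch in range(200):  # epoch count is not stated in the excerpt; 200 is a placeholder
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```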
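
The SGC variant with "6 propagations" can be sketched in the same way, assuming PyTorch Geometric's SGConv with K=6; the optimizer settings follow the excerpt above, and everything else (library, epoch count) is again an assumption rather than the authors' implementation.

```python
# Illustrative sketch: SGC with 6 feature-propagation steps (K=6), assuming
# PyTorch Geometric's SGConv; optimizer settings follow the quoted hyper-parameters.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SGConv

dataset = Planetoid(root="data", name="Citeseer", split="public")
data = dataset[0]

model = SGConv(dataset.num_features, dataset.num_classes, K=6, cached=True)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05, weight_decay=0.01)

model.train()
for epoch in range(200):  # epoch count not given in the excerpt; placeholder value
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```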