Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation
Authors: Hugo Attali, Davide Buscaldi, Nathalie Pernelle
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experimentation demonstrates that our method consistently outperforms established graph rewiring methods. We conducted experiments on ten different datasets for the node classification task, comprising seven heterophilic datasets (Tang et al., 2009; Rozemberczki et al., 2021; Platonov et al., 2023) and three homophilic datasets (Sen et al., 2008). The dataset statistics are presented in Table 2. |
| Researcher Affiliation | Academia | LIPN, Université Sorbonne Nord. Correspondence to: Hugo Attali <attali@lipn.univ-paris13.fr>. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Reproducibility. Our code to reproduce the experiments of the paper is available. Code available from: https://github.com/Hugo-Attali/Delaunay-Rewiring |
| Open Datasets | Yes | We conducted experiments on ten different datasets for the node classification task, comprising seven heterophilic datasets (Tang et al., 2009; Rozemberczki et al., 2021; Platonov et al., 2023) and three homophilic datasets (Sen et al., 2008). The dataset statistics are presented in Table 2. |
| Dataset Splits | Yes | For all graph datasets, we randomly sample 60% of nodes for training, allocate 20% for validation, and reserve another 20% for testing. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, cloud instance types) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow, or other libraries). |
| Experiment Setup | Yes | Hence, we set the number of layers to 2, dropout to 0.5, learning rate to 0.005, patience to 100 epochs, and weight decay to 5e-6 (Texas, Wisconsin and Cornell) or 5e-5 (other datasets). The number of hidden states is set to 32 (Texas/Wisconsin/Cornell), 48 (Squirrel, Chameleon and Roman-Empire), 32 (Actor), and 16 (Cora, Citeseer and Pubmed). |
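The 60/20/20 random node split quoted in the Dataset Splits row can be sketched as follows. This is an illustrative helper, not the authors' released code; the function name, seed handling, and use of NumPy are assumptions.

```python
import numpy as np

def random_node_split(n_nodes, seed=0):
    """Randomly partition node indices into 60% train, 20% validation,
    20% test, as described in the paper's experimental setup.
    (Illustrative sketch; the paper's exact split code may differ.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_nodes)
    n_train = int(0.6 * n_nodes)
    n_val = int(0.2 * n_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = random_node_split(1000)
```

Because the split is index-based, the three sets are disjoint by construction and together cover every node.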
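Since no pseudocode is reported in the paper, the core rewiring step named in the title can be sketched from the Delaunay triangulation alone: given 2-D positions for the nodes (the paper derives these from node embeddings; the projection step is omitted here and simply assumed as input), the rewired graph takes the triangulation's triangle sides as its new edge set. The function name and use of `scipy.spatial.Delaunay` are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_rewire(points):
    """Build a rewired edge list from the Delaunay triangulation of
    2-D node positions `points` (shape (n, 2)). Each triangle in the
    triangulation contributes its three sides as undirected edges.
    (Sketch only; how the 2-D positions are obtained from node
    embeddings follows the paper's pipeline, not shown here.)"""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:        # each simplex holds 3 node ids
        for i in range(3):
            for j in range(i + 1, 3):
                a, b = int(simplex[i]), int(simplex[j])
                edges.add((min(a, b), max(a, b)))  # store undirected edge once
    return sorted(edges)

rng = np.random.default_rng(0)
pts = rng.random((10, 2))                # 10 random 2-D node positions
new_edges = delaunay_rewire(pts)
```

A planar triangulation of n points has at most 3n - 6 edges, so the rewired graph stays sparse regardless of the original graph's density, which is one motivation for this style of rewiring.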