A Fractional Graph Laplacian Approach to Oversmoothing
Authors: Sohir Maskey, Raffaele Paolino, Aras Bacho, Gitta Kutyniok
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on synthetic and real-world graphs, both directed and undirected, demonstrating our method's versatility across diverse graph homophily levels. Our experimental results indicate the advantages offered by fractional graph Laplacians, particularly in non-homophilic and directed graphs. |
| Researcher Affiliation | Academia | Sohir Maskey Department of Mathematics, LMU Munich maskey@math.lmu.de Raffaele Paolino Department of Mathematics & MCML, LMU Munich paolino@math.lmu.de Aras Bacho Department of Mathematics, LMU Munich Gitta Kutyniok Department of Mathematics & MCML, LMU Munich |
| Pseudocode | Yes | Algorithm 1: fLode |
| Open Source Code | Yes | Our code is available on GitHub. The code and instructions to reproduce the experiments are available on GitHub. |
| Open Datasets | Yes | Real-World Graphs. We report results on 6 undirected datasets consisting of both homophilic graphs, i.e., Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008) and Pubmed (Namata et al., 2012), and heterophilic graphs, i.e., Film (Tang et al., 2009), Squirrel and Chameleon (Rozemberczki et al., 2021). We evaluate our method on the directed and undirected versions of Squirrel, Film, and Chameleon. In all datasets, we use the standard 10 splits from (Pei et al., 2019). |
| Dataset Splits | Yes | For each split, 48% of the nodes are used for training, 32% for validation, and 20% for testing. In the synthetic experiments, the training set contains 20 nodes per cluster, 500 nodes are used for validation, and the rest for testing. |
| Hardware Specification | Yes | All experiments were run on an internal cluster with NVIDIA GeForce RTX 2080 Ti and NVIDIA TITAN RTX GPUs with 16 and 24 GB of memory, respectively. |
| Software Dependencies | No | Our model is implemented in PyTorch (Paszke et al., 2019), using PyTorch Geometric (Fey et al., 2019). The computation of the SVD for the fractional Laplacian is implemented using the linalg module provided by PyTorch. In the case of truncated SVD, we use the function randomized_svd provided by the extmath module from sklearn. The paper mentions software names and their associated publication years, but does not provide specific version numbers for PyTorch, PyTorch Geometric, or sklearn. |
| Experiment Setup | Yes | Hyperparameters were tuned using grid search. The exact hyperparameters for FLODE are provided in Table 5. Table 5 includes specific values for learning rate, weight decay, hidden channels, num. layers, encoder layers, decoder layers, input dropout, decoder dropout, exponent, step size, and Dirichlet energy. |
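The SVD-based fractional-Laplacian computation described under Software Dependencies can be sketched as follows. This is a hypothetical NumPy/scikit-learn sketch, not the authors' implementation (the paper uses PyTorch's linalg for the full SVD); the function name `fractional_power` and the toy graph are illustrative assumptions.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd


def fractional_power(L, alpha, k=None):
    """Raise a graph Laplacian L to a fractional power alpha via SVD.

    Hypothetical sketch: with L = U S V^T, return U S^alpha V^T.
    If k is given, use scikit-learn's randomized (truncated) SVD,
    mirroring the truncated-SVD path mentioned in the paper.
    """
    if k is None:
        U, s, Vt = np.linalg.svd(L)  # full SVD (paper: torch.linalg)
    else:
        U, s, Vt = randomized_svd(L, n_components=k, random_state=0)
    return U @ np.diag(s ** alpha) @ Vt


# Toy example (assumed for illustration): Laplacian of a 3-node path graph.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A
L_half = fractional_power(L, 0.5)  # fractional exponent alpha = 0.5
```

Because this toy Laplacian is symmetric positive semi-definite, squaring `L_half` recovers `L`, which gives a quick sanity check of the construction.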