TIDE: Time Derivative Diffusion for Deep Learning on Graphs

Authors: Maysam Behmanesh, Maximilian Krahn, Maks Ovsjanikov

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5. Experiments: We compare our methods to strong baselines on typical node classification benchmarks and present novel long-distance communication experiments."
Researcher Affiliation | Academia | "LIX, Ecole polytechnique, IP Paris, France; Aalto University, Finland."
Pseudocode | No | The paper includes a schematic view of the proposed TIDE network architecture (Figure 2) but does not contain a formal pseudocode block or algorithm listing.
Open Source Code | Yes | "Our implementation is available at https://github.com/maysambehmanesh/TIDE"
Open Datasets | Yes | Evidence spans Table 1, Table 5, and a synthetic dataset. Table 1 compares the accuracy (mean ± std) of the proposed models against baselines such as GCN (Kipf & Welling, 2017) on Cora, Citeseer, Pubmed, Coauthor CS, Computers, Photo, and Ogbn-arxiv. Table 5 ("Dataset properties") is reproduced below, and the paper adds: "We build a synthetic dataset based on the FAUST (Bogo et al., 2014) collection of shapes represented as triangle meshes."

Table 5. Dataset properties
Graph | #Nodes | #Edges | #Node features | #Classes | Avg. node deg. | Graph diameter | Label rate
Cora | 2485 | 5069 | 1433 | 7 | 4.07 | 1.53 | 0.056
Citeseer | 2120 | 3679 | 3703 | 6 | 3.47 | 1.44 | 0.057
PubMed | 19717 | 44324 | 500 | 3 | 4.49 | 18 | 0.003
Coauthor CS | 18333 | 81894 | 6805 | 15 | 8.93 | 24 | 0.016
Computers | 13381 | 245778 | 767 | 10 | 36.73 | 0.16 | 0.015
Photos | 7487 | 119043 | 745 | 8 | 31.79 | 0.23 | 0.021
Ogbn-arxiv | 169343 | 1166243 | 128 | 40 | 13.67 | 23 | 1
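As a minimal, hedged sketch (not the authors' loader; the root directory and choice of dataset classes are assumptions), the public benchmarks above can be fetched through the built-in torch_geometric and ogb dataset loaders:

```python
# Minimal sketch (assumption, not the authors' code): fetching the public
# benchmarks from Table 5 via torch_geometric and ogb dataset classes.
from torch_geometric.datasets import Planetoid, Coauthor, Amazon
from ogb.nodeproppred import PygNodePropPredDataset

root = "data"  # hypothetical download directory

cora = Planetoid(root, name="Cora")[0]
citeseer = Planetoid(root, name="Citeseer")[0]
pubmed = Planetoid(root, name="Pubmed")[0]
coauthor_cs = Coauthor(root, name="CS")[0]
computers = Amazon(root, name="Computers")[0]
photo = Amazon(root, name="Photo")[0]
ogbn_arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root=root)[0]

print(cora)  # Data object with node features x, edge_index, and labels y
```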
Dataset Splits | Yes | "Train and test nodes are on the same single graph, with a random 0.2, 0.3, 0.5 split for train, validation, and test, respectively."
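A hedged sketch of how such a random 0.2/0.3/0.5 node split could be produced (the function name and seed are assumptions, not the authors' code):

```python
# Sketch (assumption, not the authors' code): a random 0.2/0.3/0.5
# train/validation/test node split on a single graph.
import torch

def random_split(num_nodes: int, seed: int = 0):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(0.2 * num_nodes)
    n_val = int(0.3 * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True  # remaining ~0.5 of the nodes
    return train_mask, val_mask, test_mask
```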
Hardware Specification | Yes | "The experiments are conducted on an NVIDIA A100 GPU with 40 GB of GPU memory."
Software Dependencies | No | "The models are implemented in PyTorch, and the PyTorch Geometric library is incorporated in addition to standard PyTorch. To accelerate learning, GPU acceleration is utilized, while the diffusion operator and the gradient operator are preprocessed on a CPU using the SciPy library." No version numbers are provided for PyTorch, PyTorch Geometric, or SciPy.
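As an illustration of the CPU-side preprocessing the quote describes (a sketch under assumptions; the paper does not show this code, and the normalization choice is ours), a sparse graph Laplacian of the kind diffusion layers precompute can be assembled with SciPy:

```python
# Sketch (assumption): assembling a normalized graph Laplacian on the CPU with
# SciPy, the kind of sparse operator a diffusion layer would precompute.
import numpy as np
import scipy.sparse as sp

def normalized_laplacian(edge_index: np.ndarray, num_nodes: int) -> sp.csr_matrix:
    # edge_index: (2, num_edges) array of source/target node indices
    row, col = edge_index
    data = np.ones(row.shape[0])
    A = sp.coo_matrix((data, (row, col)), shape=(num_nodes, num_nodes)).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    D = sp.diags(d_inv_sqrt)
    return (sp.eye(num_nodes) - D @ A @ D).tocsr()  # L = I - D^{-1/2} A D^{-1/2}
```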
Experiment Setup | Yes | "For a fair comparison, we set similar values for the common hyperparameters in all baselines. For this purpose, we use 64 channels in the hidden layer and a dropout probability of 0.5. All models are trained with a maximum of 500 epochs with a learning rate of 0.01."
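Collected as a hedged configuration sketch: only the four numeric values come from the quote; the model shape (Cora's 1433 input features and 7 classes from Table 5) and the Adam optimizer are assumptions, not the paper's setup.

```python
# Sketch (assumption): the reported common hyperparameters as a training setup.
# Only the four numeric values come from the quote; the model shape and the
# Adam optimizer are assumptions.
import torch

config = {"hidden_channels": 64, "dropout": 0.5, "max_epochs": 500, "lr": 0.01}

model = torch.nn.Sequential(
    torch.nn.Linear(1433, config["hidden_channels"]),  # Cora's 1433 input features
    torch.nn.ReLU(),
    torch.nn.Dropout(config["dropout"]),
    torch.nn.Linear(config["hidden_channels"], 7),  # Cora's 7 classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
```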