Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts
Authors: Zeyang Zhang, Xin Wang, Ziwei Zhang, Zhou Qin, Weigao Wen, Hui Xue, Haoyang Li, Wenwu Zhu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real-world dynamic graph datasets demonstrate the superiority of our method for both node classification and link prediction tasks under distribution shifts. |
| Researcher Affiliation | Collaboration | Zeyang Zhang¹, Xin Wang¹, Ziwei Zhang¹, Zhou Qin², Weigao Wen², Hui Xue², Haoyang Li¹, Wenwu Zhu¹ (¹Department of Computer Science and Technology, BNRist, Tsinghua University; ²Alibaba Group) |
| Pseudocode | Yes | The overall algorithm for training on node classification datasets is summarized in Algo. 1 ("Algorithm 1: Training pipeline for SILD on node classification datasets"). A hedged sketch of such a training step follows the table. |
| Open Source Code | Yes | The codes are available at GitHub. |
| Open Datasets | Yes | We use 3 real-world dynamic graph datasets, including Collab [41, 7], Yelp [24, 7] and Aminer [42, 43]. |
| Dataset Splits | Yes | We train on papers published between 2001 and 2011, validate on those published in 2012–2014, and test on those published since 2015. ... For both datasets, we set the shift level parameters as 0.4, 0.6, 0.8 for training and validation splits, and 0 for test splits. (A toy chronological-split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions `λ` as a hyperparameter and describes dataset splitting strategies, but it does not provide specific concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configurations within the main text. It defers 'More details of the settings' to the Appendix. |
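The chronological split quoted in the Dataset Splits row (train on 2001–2011, validate on 2012–2014, test from 2015 onward) is mechanical enough to illustrate in code. Below is a minimal sketch assuming toy records with a `year` field; the `split_by_year` helper and the field names are hypothetical and not taken from the paper's released code.

```python
# Hypothetical illustration of the chronological split described in the
# table (train: 2001-2011, validation: 2012-2014, test: 2015 onward).
# The record structure and field names are assumptions, not the paper's.

def split_by_year(papers):
    """Partition records with a `year` field into train/val/test spans."""
    train = [p for p in papers if 2001 <= p["year"] <= 2011]
    val = [p for p in papers if 2012 <= p["year"] <= 2014]
    test = [p for p in papers if p["year"] >= 2015]
    return train, val, test

# Example usage with toy records:
papers = [{"id": i, "year": y} for i, y in enumerate([2003, 2013, 2016, 2010])]
train, val, test = split_by_year(papers)
assert len(train) == 2 and len(val) == 1 and len(test) == 1
```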
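The Pseudocode and Experiment Setup rows note that the paper summarizes training in Algorithm 1 and names λ as a hyperparameter while deferring concrete values to the appendix. The sketch below shows only the generic shape such a training step could take: a task loss plus a λ-weighted invariance penalty. The `model` interface, the loss choice, and the `train_epoch` helper are all illustrative assumptions, not SILD's actual implementation.

```python
import torch

# Hedged sketch in the spirit of a training step from Algorithm 1:
# minimize a task loss plus an invariance regularizer weighted by lambda.
# The tuple-returning model interface is an assumption for illustration.

def train_epoch(model, snapshots, labels, optimizer, lam=1.0):
    model.train()
    optimizer.zero_grad()
    logits, invariance_penalty = model(snapshots)  # assumed interface
    task_loss = torch.nn.functional.cross_entropy(logits, labels)
    loss = task_loss + lam * invariance_penalty  # lambda trades off the terms
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the paper reports neither optimizer settings nor a value for λ in the main text, anyone reproducing the experiments would need to recover these from the appendix or the released code.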