Empowering Graph Representation Learning with Test-Time Graph Transformation
Authors: Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have demonstrated the effectiveness of GTRANS on three distinct scenarios for eight benchmark datasets where suboptimal data is presented. Remarkably, GTRANS performs the best in most cases with improvements up to 2.8%, 8.2% and 3.8% over the best baselines on three experimental settings. |
| Researcher Affiliation | Collaboration | Wei Jin1, Tong Zhao2, Jiayuan Ding1, Yozen Liu2, Jiliang Tang1 and Neil Shah2. 1Michigan State University, 2Snap Inc. |
| Pseudocode | Yes | Furthermore, the algorithm of GTRANS is provided in Appendix B. (Appendix B shows 'Algorithm 1: GTRANS for Test-Time Graph Transformation'; a minimal sketch of this loop appears after the table.) |
| Open Source Code | Yes | Code is released at https://github.com/ChandlerBang/GTrans. ... To ensure reproducibility of our experiments, we provide our source code at https://github.com/ChandlerBang/GTrans. |
| Open Datasets | Yes | For the evaluation on OOD data, we use the datasets provided by Wu et al. (2022a). The dataset statistics are shown in Table 5, which includes three distinct types of distribution shifts: (1) artificial transformation for Cora (Yang et al., 2016) and Amazon-Photo (Shchur et al., 2018), (2) cross-domain transfers for Twitch-E and FB-100 (Rozemberczki et al., 2021a; Lim et al., 2021), and (3) temporal evolution for Elliptic (Pareja et al., 2020) and OGB-Arxiv (Hu et al., 2020). |
| Dataset Splits | Yes | Cora and Amazon-Photo have 1/1/8 graphs for training/validation/test sets. The splits are 1/1/5 on Twitch-E, 3/2/3 on FB-100, 5/5/33 on Elliptic, and 1/1/3 on OGB-Arxiv. (These splits are collected in the config sketch after the table.) |
| Hardware Specification | Yes | We perform experiments on NVIDIA Tesla V100 GPUs. The GPU memory and running time reported in Table 2 are measured on one single V100 GPU. Additionally, we use eight CPUs, with the model name as Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz. |
| Software Dependencies | No | The paper mentions 'The operating system we use is CentOS Linux 7 (Core).' but does not provide specific version numbers for other key software components such as programming languages, libraries (e.g., PyTorch, TensorFlow), or scientific computing packages used in the experiments. |
| Experiment Setup | Yes | For the setup of GTRANS, we alternately optimize node features for τ1 = 4 epochs and the graph structure for τ2 = 1 epoch. We adopt DropEdge as the augmentation function A(·) and set the drop ratio to 0.5. We use the Adam optimizer for both feature learning and structure learning. We further search the learning rate of feature adaptation η1 in [5e-3, 1e-3, 1e-4, 1e-5, 1e-6], the learning rate of structure adaptation η2 in [0.5, 0.1, 0.01], the modification budget B in [0.5%, 1%, 5%] of the original edges, and the total epochs T in [5, 10]. (A sketch of this alternating loop and of the search grid follows the table.) |
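
For quick reference, the per-dataset train/validation/test graph counts quoted in the Dataset Splits row can be captured in a small config mapping. This is purely an illustrative sketch; the name `DATASET_SPLITS` and the tuple layout are assumptions, not taken from the paper's code.

```python
# Per-dataset (train, val, test) graph counts as quoted in the Dataset Splits row.
# The dictionary name and layout are illustrative assumptions.
DATASET_SPLITS = {
    "Cora":         (1, 1, 8),
    "Amazon-Photo": (1, 1, 8),
    "Twitch-E":     (1, 1, 5),
    "FB-100":       (3, 2, 3),
    "Elliptic":     (5, 5, 33),
    "OGB-Arxiv":    (1, 1, 3),
}

for name, (n_train, n_val, n_test) in DATASET_SPLITS.items():
    print(f"{name}: {n_train} train / {n_val} val / {n_test} test graphs")
```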
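The Pseudocode and Experiment Setup rows describe GTrans's core loop: with the pre-trained GNN frozen, node features and edge weights are optimized alternately at test time (τ1 = 4 feature epochs per τ2 = 1 structure epoch) against a surrogate loss whose augmentation is DropEdge with ratio 0.5, using Adam for both updates. Below is a minimal PyTorch sketch of that loop under stated assumptions: the toy GNN, the consistency-style surrogate loss, and all function names are illustrative stand-ins rather than the authors' implementation, and the projection onto the edge-modification budget B is omitted.

```python
import torch
import torch.nn.functional as F


def dropedge(edge_index, edge_weight, drop_ratio=0.5):
    """DropEdge augmentation: randomly remove a fraction of edges."""
    keep = torch.rand(edge_index.size(1)) >= drop_ratio
    return edge_index[:, keep], edge_weight[keep]


class ToyGNN(torch.nn.Module):
    """Stand-in for the frozen, pre-trained GNN (one weighted-aggregation layer)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_weight):
        src, dst = edge_index
        msg = x[src] * edge_weight.unsqueeze(-1)          # weighted messages
        agg = torch.zeros_like(x).index_add(0, dst, msg)  # sum over incoming edges
        return self.lin(x + agg)


def surrogate_loss(gnn, x, edge_index, edge_weight):
    """Parameter-free consistency surrogate: embeddings of the graph and its
    DropEdge view should agree (a stand-in for the paper's surrogate loss)."""
    z1 = gnn(x, edge_index, edge_weight)
    ei_aug, w_aug = dropedge(edge_index, edge_weight)
    z2 = gnn(x, ei_aug, w_aug)
    return -F.cosine_similarity(z1, z2, dim=-1).mean()


def gtrans_adapt(gnn, x, edge_index, edge_weight,
                 T=10, tau1=4, tau2=1, eta1=1e-4, eta2=0.1):
    """Alternate feature and structure adaptation at test time (GNN frozen)."""
    delta_x = torch.zeros_like(x, requires_grad=True)       # feature perturbation
    w = edge_weight.clone().detach().requires_grad_(True)   # learnable edge weights
    opt_feat = torch.optim.Adam([delta_x], lr=eta1)
    opt_struct = torch.optim.Adam([w], lr=eta2)
    for _ in range(T):
        for _ in range(tau1):                               # tau1 feature epochs
            loss = surrogate_loss(gnn, x + delta_x, edge_index, w)
            opt_feat.zero_grad(); opt_struct.zero_grad()
            loss.backward()
            opt_feat.step()
        for _ in range(tau2):                               # tau2 structure epochs
            loss = surrogate_loss(gnn, x + delta_x, edge_index, w)
            opt_feat.zero_grad(); opt_struct.zero_grad()
            loss.backward()
            opt_struct.step()
        # NOTE: the paper additionally constrains structure edits by a
        # modification budget B (0.5%-5% of the original edges); omitted here.
    return (x + delta_x).detach(), w.detach()


# Toy usage: 5 nodes, 4 directed edges, 8-dim features; the GNN stays frozen.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = torch.ones(edge_index.size(1))
gnn = ToyGNN(8, 8).eval()
for p in gnn.parameters():
    p.requires_grad_(False)
x_new, w_new = gtrans_adapt(gnn, x, edge_index, edge_weight)
```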
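The Experiment Setup row also lists the hyperparameter search space. A hedged sketch of how one might sweep it is below; the `itertools`-based loop and the validation-selection logic are assumptions, not the authors' tuning script (note that `gtrans_adapt` above does not implement the budget, so `budget` is shown only as a grid entry).

```python
from itertools import product

# Search space quoted in the Experiment Setup row; sweeping logic is assumed.
grid = {
    "eta1":   [5e-3, 1e-3, 1e-4, 1e-5, 1e-6],  # feature-adaptation learning rate
    "eta2":   [0.5, 0.1, 0.01],                # structure-adaptation learning rate
    "budget": [0.005, 0.01, 0.05],             # edge-modification budget B
    "T":      [5, 10],                         # total epochs
}

for eta1, eta2, budget, T in product(*grid.values()):
    # Run gtrans_adapt with this configuration and keep the setting
    # that scores best on the validation graphs.
    pass
```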