GALOPA: Graph Transport Learning with Optimal Plan Alignment

Authors: Yejiang Wang, Yuhai Zhao, Daniel Zhengkui Wang, Ling Li

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental findings include: (i) the plan alignment strategy significantly outperforms the counterpart that uses the transport distance; (ii) the proposed model achieves superior performance using only node attributes as calibration signals, without relying on edge information; (iii) the model maintains robust results even under high perturbation rates; and (iv) extensive experiments on various benchmarks validate the effectiveness of the proposed method.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Northeastern University, China; (2) Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, China; (3) Infocomm Technology Cluster, Singapore Institute of Technology, Singapore.
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block; the methodology is described in prose in Section 4.
Open Source Code | No | The paper does not provide a link to, or an explicit statement about releasing, source code for the described methodology.
Open Datasets | Yes | For node classification, we evaluate the pretraining representations on 7 benchmark graph datasets, namely CORA, CITESEER, PUBMED [25], and Wiki-CS, Amazon-Computers, Amazon-Photo, and Coauthor-CS [47]. For graph classification, we follow GRAPHCL [72] to perform evaluations on 6 graph classification datasets from TUDataset [36]: NCI1, PROTEINS, DD, MUTAG, COLLAB, and IMDB-B (a dataset-loading sketch follows the table).
Dataset Splits | Yes | We randomly split the data: for graph datasets, 80%/10%/10% of graphs form the training, validation, and test sets, respectively; for node datasets, the proportions are 10%/10%/80% of nodes (illustrated in the split sketch after the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU or CPU models, memory specifications).
Software Dependencies | No | The paper states that the model is "implemented with Pytorch Geometric [13] and Deep Graph Library [63]" but does not specify version numbers for these components or any other dependencies.
Experiment Setup | Yes | In the experiments, we use the Adam optimizer [23] with the learning rate tuned in {0.0001, 0.001, 0.01}. The trade-off parameters ρ and σ, the SVM parameter C, and the batch size are searched over {10⁻³, 10⁻², ..., 10², 10³}, {0, 0.1, ..., 0.9, 1}, {10⁻³, ..., 10³}, and {16, 64, 128, 256, 512}, respectively (enumerated in the grid-search sketch after the table). To perform graph augmentation, we use 4 types of operations: Edge Perturbation, Feature Masking, Node Dropping, and Graph Sampling.
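
For reference, every benchmark listed under Open Datasets is available through standard loaders. A minimal sketch, assuming PyTorch Geometric's bundled dataset classes rather than the authors' own (unreleased) pipeline:

    from torch_geometric.datasets import Planetoid, WikiCS, Amazon, Coauthor, TUDataset

    # Node-classification benchmarks
    cora = Planetoid(root='data/Planetoid', name='Cora')
    citeseer = Planetoid(root='data/Planetoid', name='CiteSeer')
    pubmed = Planetoid(root='data/Planetoid', name='PubMed')
    wiki_cs = WikiCS(root='data/WikiCS')
    amazon_computers = Amazon(root='data/Amazon', name='Computers')
    amazon_photo = Amazon(root='data/Amazon', name='Photo')
    coauthor_cs = Coauthor(root='data/Coauthor', name='CS')

    # Graph-classification benchmarks (IMDB-B is registered as IMDB-BINARY in TUDataset)
    for name in ['NCI1', 'PROTEINS', 'DD', 'MUTAG', 'COLLAB', 'IMDB-BINARY']:
        dataset = TUDataset(root='data/TUDataset', name=name)
        print(name, len(dataset))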
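
The random splits described in Dataset Splits reduce to an index permutation. The helper below is a hypothetical illustration of the stated ratios, not the authors' code:

    import torch

    def random_split(num_items, frac_train, frac_val, seed=0):
        # Shuffle indices, then carve out train/validation/test slices.
        generator = torch.Generator().manual_seed(seed)
        perm = torch.randperm(num_items, generator=generator)
        n_train = int(frac_train * num_items)
        n_val = int(frac_val * num_items)
        return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

    # Graph datasets: 80%/10%/10% of graphs.
    train_g, val_g, test_g = random_split(1113, 0.80, 0.10)  # e.g. PROTEINS has 1113 graphs
    # Node datasets: 10%/10%/80% of nodes.
    train_n, val_n, test_n = random_split(2708, 0.10, 0.10)  # e.g. CORA has 2708 nodes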
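
The grid reported under Experiment Setup can be enumerated as follows. The values are transcribed from the paper (with the exponents restored from the garbled PDF extraction); the loop body and the train_and_evaluate name are hypothetical placeholders:

    import itertools

    grid = {
        'learning_rate': [0.0001, 0.001, 0.01],           # Adam learning rate
        'rho': [10.0 ** k for k in range(-3, 4)],         # trade-off rho: {10^-3, ..., 10^3}
        'sigma': [round(0.1 * i, 1) for i in range(11)],  # trade-off sigma: {0, 0.1, ..., 1}
        'svm_C': [10.0 ** k for k in range(-3, 4)],       # SVM parameter C: {10^-3, ..., 10^3}
        'batch_size': [16, 64, 128, 256, 512],
    }

    for lr, rho, sigma, C, batch_size in itertools.product(*grid.values()):
        # train_and_evaluate(lr, rho, sigma, C, batch_size)  # pretraining + SVM evaluation
        pass

Exhausting the full product would mean 3 × 7 × 11 × 7 × 5 = 8,085 configurations, which is worth keeping in mind when budgeting a reproduction run.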