Topological Relational Learning on Graphs

Authors: Yuzhou Chen, Baris Coskunuzer, Yulia Gel

NeurIPS 2021

Reproducibility Variable Result LLM Response
Research Type: Experimental. "The experimental results on node classification tasks demonstrate that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations, yielding up to 10% better performance under noisy scenarios." "Our expansive node classification experiments show that TRI-GNN outperforms 14 state-of-the-art baselines on 6 out of 7 graphs and delivers substantially higher robustness (i.e., up to 10% in performance gains under noisy scenarios) than baselines on all 7 datasets." "We now empirically evaluate the effectiveness of our proposed method on seven node-classification benchmarks under a semi-supervised setting with different graph sizes and feature types. We run all experiments 50 times and report the average accuracy results and standard deviations." "Table 1 shows the node classification results on graphs."
Researcher Affiliation: Academia. Yuzhou Chen, Department of Electrical Engineering, Princeton University (yc0774@princeton.edu); Baris Coskunuzer, Department of Mathematical Sciences, University of Texas at Dallas (coskunuz@utdallas.edu); Yulia R. Gel, Department of Mathematical Sciences, University of Texas at Dallas, and National Science Foundation (ygl@utdallas.edu).
Pseudocode: No. The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code: Yes. The source code of TRI-GNN is publicly available at https://github.com/TRI-GNN/TRI-GNN.git.
Open Datasets: Yes. "Datasets. We compare TRI-GNN with the state-of-the-art (SOA) baselines, using standard publicly available real and synthetic networks: (1) 3 citation networks [41]: Cora-ML, CiteSeer, and PubMed, where nodes are publications and edges are citations; (2) 4 synthetic power grid networks [8, 7, 23]: IEEE 118-bus system, ACTIVSg200 system, ACTIVSg500 system, and ACTIVSg2000 system, where each node represents a load bus, transformer, or generator and we use total line charging susceptance (BR_B) as edge weight."
Dataset Splits: No. The provided text mentions a semi-supervised setting for the node classification experiments but does not specify explicit train/validation/test splits.
Hardware Specification: No. The paper does not provide specific details about the hardware used for running the experiments (e.g., specific GPU or CPU models, memory, or cluster specifications).
Software Dependencies: No. The paper does not list specific software dependencies with their version numbers (e.g., programming languages, libraries, or frameworks with version details).
Experiment Setup: Yes. "Selection of hyperparameters ϵ1 and ϵ2 can be performed by assessing quantiles of the empirical distribution of shape similarities and then cross-validation. For instance, the optimal quantiles of ϵ1 and ϵ2 for the ACTIVSg200 dataset are 0.55 and 2.50, respectively. For ϵ1, we generate a sequence from 0.50 to 2.00 with an increment of 0.05; for ϵ2, we generate a sequence from 2.50 to 6.74 with an increment of 0.5. In the experiments µ is selected from {0.1, 0.2, . . . , 0.9} ..."
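The hyperparameter grids quoted above can be sketched as follows; the variable names and the final cross-validation scoring step are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

# Grids for the two threshold hyperparameters, as described in the paper:
# eps1 from 0.50 to 2.00 in steps of 0.05, eps2 from 2.50 to 6.74 in steps of 0.5.
# (Names eps1_grid/eps2_grid/mu_grid are illustrative, not from the released code.)
eps1_grid = np.round(np.arange(0.50, 2.00 + 1e-9, 0.05), 2)  # 0.50, 0.55, ..., 2.00
eps2_grid = np.round(np.arange(2.50, 6.74, 0.5), 2)          # 2.50, 3.00, ..., 6.50
mu_grid = np.round(np.arange(0.1, 1.0, 0.1), 1)              # 0.1, 0.2, ..., 0.9

# Cross-validation would then score each (eps1, eps2, mu) combination
# (e.g., by validation accuracy) and keep the best-performing triple.
candidates = [(e1, e2, m) for e1 in eps1_grid for e2 in eps2_grid for m in mu_grid]
print(len(eps1_grid), len(eps2_grid), len(mu_grid), len(candidates))
```

Note that `np.arange` excludes the stop value, so a small epsilon is added to keep the 2.00 endpoint for ϵ1, while the 6.74 stop for ϵ2 naturally ends the grid at 6.50.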